
There have been a lot of posts related to garbage collection lately but none of them touched upon what I see as a crucial issue: why is garbage collection needed to begin with?

Could you do without it? What is the key point that made it necessary?

I'm aware of it being introduced by McCarthy in the original Lisp paper in 1960, of course. But I suspect what McCarthy originally meant is not what garbage collection turned out to be. What I suspect he meant was that there needs to be a way for memory to be managed. malloc/free offer a way for memory to be managed, and presumably they weren't invented until C appeared nine years later. What McCarthy might have meant is what became malloc/free in C, which doesn't need garbage collection.

C isn't the only flag here. Was there any OS on the IBM 704 used to implement the original Lisp? Did the OS support multiprocessing? Because if it didn't (UNIX wasn't invented until 1969 either) it would make sense for memory to be available for a single process. And it would mean when people said garbage collection they were envisioning malloc/free.

(Also, databases and operating systems can live without whatever makes garbage collection necessary, since they don't use it, and those are pretty complex and fast pieces of software.)

So, what makes garbage collection different than malloc/free, and why is it necessary? I'd love to learn more about that.


"why is garbage collection needed to begin with?"

Well, think about this: what would a declarative language like Prolog look like if you had to manually deallocate memory? At a high enough level of abstraction, it does not make any sense to manually deallocate memory; it may not even make sense to speak about "memory" at all for some abstractions.

"So, what makes garbage collection different than malloc/free, and why is it necessary? I'd love to learn more about that."

It is the same as the difference between having a compiler generate the instruction sequence for setting up stack frames, and writing those instructions yourself in assembly language. As a programmer, your time is probably better spent on higher-level things than how stack frames are being set up. Likewise with allocating and deallocating memory: you will probably be more productive if you spend your time on something else, like the design or logic of your program.

Remember, programming languages exist for programmers, to make us more productive. We are able to do more when we are not being distracted by things like figuring out when it is safe to deallocate memory or ensuring that we do not create memory leaks.


I don't believe Lisp was even viable without garbage collection, given that lists are created and shared left and right, not to mention closures. Sure, McCarthy could have used "malloc," but making the right "free" calls would have been impossible.

The lack of garbage collection is the main reason why lambdas were slow to be added to C++, as the objects they close over have indefinite lifetimes.

> Because if it didn't (UNIX wasn't invented until 1969 either) it would make sense for memory to be available for a single process. And it would mean when people said garbage collection they were envisioning malloc/free.

This doesn't make any sense, sorry. Manual memory management works well when objects have definite lifetimes, and fails miserably when they don't.


> why is garbage collection needed to begin with?

Computers don't have infinite memory. If they did, there would be no need for garbage collection. So if your program needs more memory than the system has, you either need to buy more memory, or figure out areas of the existing memory space that can be overwritten, or rewrite your program to use less memory. (Some embedded systems developers never use free() because they know their code is the only running code and know it probably won't use all the device's memory.) You can manually figure out and mark memory that can be overwritten by using free(), which can be error-prone if you accidentally mark a block of memory as available that shouldn't be. Or you can let the programming language runtime figure it out itself according to whatever algorithm (such as, simplistically, "this block of memory won't ever be accessed again in the running code, we can reuse its memory"). Garbage collection algorithms can be a lot more sophisticated and use memory-sharing and other things, but the point is to make memory usage efficient and not something the programmer necessarily needs to worry about.


I think one of the keys to understanding garbage collection is to understand that it is on a continuum of memory management techniques, and the line is a great deal less bright and shining than people often realize. malloc/free is "manual memory management", right?

Well, not really. Truly manual memory management is getting an enormous block of memory from the OS, and fully manually choosing what goes where within that block of memory. This is indeed a real technique used when the situation is dire enough. If you're not doing that, you're deferring something to your automated solution, the only question is, how much?

malloc/free is a combination that still gives you lots of power, but it's not perfect when you start pushing the abstraction hard enough. All malloc/free combinations have some sort of pathological case, where whatever allocation strategy they are choosing is wrong for some reason. That's why there isn't just one implementation, there's many, and some applications can see big wins switching, while others may see losses.

Garbage collection isn't really some sort of binary dichotomous decision vs. malloc/free; both are forms of automated memory management. Garbage collection techniques just step it up, and try to infer when you are done with memory. The disadvantage is, they may not be as smart as a human, the advantage is that they're a heck of a lot more consistent and pay much better attention. Then, even within "garbage collection", you've got a gradient; you may see a language like Rust with mostly deterministic memory management, but with an easy call-out to GC if you need it. You may see a language like Perl, with relatively consistent finalization as soon as an unshared reference leaves a scope, or you may see something that only collects during sweeps. At the far end you get imprecise garbage collectors, such as those used in Go right now (though they are working on it, and have already made a lot of progress), so even within the realm of GC there's a range of precision vs. speed vs. nuances the programmer needs to know to use them.

GC is necessary because one particular point on this large spectrum isn't the right choice for every program. It isn't even the maximally manual solution. In fact, there's even some middle grounds between malloc/free and fully manual memory management, such as arena allocation. There's a lot of fine gradation on this scale.

"(Also, databases and operating systems can live without whatever makes garbage collection necessary, since they don't use it, and those are pretty complex and fast pieces of software.)"

A bold statement. Have you ever heard the term "compaction" used within the context of a database? That's a form of garbage collection. Contrary to what you said, almost all databases have some form of garbage collection. (Probably all the ones you're thinking about.)

As for whether operating systems have garbage collection, it depends on your precise definition of "operating system". Kernels may lack it, but as critical as they are, and as much sheer staggering code they may have for drivers and stuff, conceptually they actually aren't that complicated, compared to a lot of userland things. Broaden your definition and you'll find some form of garbage collection appear again. And if you include "reference counting" approaches as a form of garbage collection, the Linux kernel uses that all over the place. Is reference counting a form of garbage collection? Well, it can actually be a tough call... because it's all on a continuum.


(thank you for the willingness to write such a detailed response)

There exists a point on the memory management continuum where management starts being more automatic (gc) than manual (malloc/free). I would like to understand the forces surrounding this specific point, right before the scale tips towards automatic.

If you tried to build a dynamic language without automatic management, what would break and why?


The biggest problem is scope control; as you start having closures that get passed around freely, those closures drag values along with them that you can't collect. It is not impossible to write this with malloc/free, but I've played that game and it's not very fun. And remember, what seems easy in one little blog post isn't easy in a real program where you've got dozens of the little buggers flying every which way. (And by dozens, I mean dozens of distinct different types of closures from different sources, not just dozens of instances of the same code.)

Many of the dynamic languages fling closures around with wild abandon, often without you even realizing it. (One object has a reference to another object which is from the standard library which happens to have a closure to something else which ends up with a reference to your original object... oops, circular reference loop. Surprisingly easy.)

There isn't much technically impossible with malloc/free (though IIRC there are indeed some cases that are both sensible and actually can't be correctly expressed in that framework, but the example escapes me), but there's lots of practical code where the cost/benefit ratio goes absurdly out of whack if you're trying to write the manual freeing properly. It's hard to write an example here, because it arises from interactions in a large code base exceeding your ability to understand them. It's like when people try to demonstrate how dangerous old-style pthreading is; even though the problem is exponentially bad in nature, anything that fits in a blog post is still comprehensible. The explosion doesn't happen until you get to real code.


I can see how closures would take some work. Thanks.

There's evidence garbage collection was not the desired solution but a plan B. McCarthy writes the reason reference counting was not implemented was a hardware limitation [1]:

"Since there were only six bits left in a word, and these were in separated parts of the word, reference counts seemed infeasible without a drastic change in the way list structures were represented. (A list handling scheme using reference counts was later used by Collins (1960) on a 48 bit CDC computer).

The second alternative is garbage collection..."

[1] - http://www-formal.stanford.edu/jmc/history/lisp/node3.html#S...


If the language was also dynamically typed, that could also lead to some interesting problems. What is the actual size used to represent a string, or an int, or a double in the language? What happens when you want to compare/convert different types?

Either the language provides these capabilities, in which case it's doing some memory management of its own, or you write them, in which case it wasn't necessarily all that dynamic to start with.

I imagine it might look like what you get when you try to write a dynamic language in C (Perl, Ruby, Python): a lot of macros that automate what's going on, to the degree that you are mostly writing some macro-based DSL.


"[T]he advantage is that they're a heck of a lot more consistent and pay much better attention."

This implies that the only reason for garbage collection is programmer fallibility. However, I think there are some cases where there is literally no better way of knowing when everyone is done with a piece of memory than by checking (closures strike me as an area where this can happen pretty quickly), at which point the only solution is automated garbage collection - be it provided by the runtime, a library, or written by the programmer for the particular program.


> This is indeed a real technique used when the situation is dire enough.

Like console video games. There are rarely malloc/free-style allocations; most of those that exist are for some third-party API, and there are often ways to avoid them... or the system itself allocates (sometimes the C runtime would do that beneath you: sometimes obviously - strdup; sometimes almost obviously - fopen/fclose; and sometimes unexpectedly - for example, certain *printf functions might do it).


free is just a way for you to tell the C runtime that you are done using a piece of memory and that it may be re-used to handle another malloc request. Of course, the programmer has to know that he is done with a particular piece of memory, and it can be very difficult to determine at what point that is the case. GC just removes the need to call free - some algorithm runs around and determines what memory you are done with, and returns it to the free store for you. In essence, GC allows you to operate under the illusion that you have unlimited memory.

[ignoring destructors/finalizers for now]


Well, if you have data that can be referenced from more than one spot (for example, an object pointer that is a member of multiple lists), you need to avoid calling free until the last reference is gone. One way to track that is to use an incrementing and decrementing reference counter, but then you end up leaking memory as soon as you get cyclical references. A garbage collector solves the problem of cyclical references.

In some cases a garbage collector can turn out to be more efficient too, since there is no need to spend time carefully managing reference counts! :)


It's the ownership problem that makes garbage collection useful. Tracking who owns which piece of memory at a particular point in time is a lot of work. Every API has to document if and how an object returned as a pointer should be freed and if an incoming pointer continues to be owned by the caller or not. Every single library invents its very own memory management pattern and you have to keep them all in your head or bad things will happen.

Look at libpq for instance. There are several functions that free memory and destroy different types of objects: PQclear, PQfreemem, PQfinish, PQconninfoFree and maybe others that I forget. For some API functions the documentation explicitly states which free function to use, for others it says nothing and for yet others it says not to free anything because another object retains ownership.

C++ tries to solve the problem that libpq has by using RAII and destructors, but it opened a can of worms. Now you have to deal with tons of different types of smart pointers and fancy memory management patterns coming from different libraries and frameworks. And you must never think that smart pointers are really pointers because if you do, bad things will happen:

Can you return the this pointer from a member function? Not if someone else holds a shared_ptr to this. Can you return shared_ptr<ThisClass>(this)? No, because now you may have two different reference counts for the same object. Can you return shared_from_this() after inheriting enable_shared_from_this? Only if at least one shared_ptr already points at this, which depends on how the caller created this. Is SomeClassPtr a SomeClass*, a shared_ptr<SomeClass> or maybe a boost::intrusive_ptr<SomeClass>? Go find the typedef.

So garbage collection is desirable. Unfortunately it seems to be a very difficult problem to solve in the face of large amounts of memory and lots of parallelism, which is exactly where we're headed. I wish Oracle would buy Azul and make their garbage collector available in OpenJDK. The world would be a better place and I would forgive them all the nasty stuff they have done recently :)


> What is the key point that made it necessary?

Returning complex objects without requiring them to be manually freed by whoever receives them (the caller).

Also, in certain languages garbage collection actually allows saving of memory. If all strings are immutable, then "one", "one" and "one" might be stored in one place, and whoever asked for them would get the same place (you also get the benefit of using only pointer comparison, instead of strcmp). For this, GC is needed: you return things, but you don't specify who owns them. It's just that much later an "agency" starts looking from certain roots to track down which things are still owned, and which are not.

For example - cherry tree - start from the root - all cherries on the tree are owned, all on the ground are no longer - they can be collected. Poor analogy, but it might work for some people.

One more point: if you have 1,000,000 mallocs and then 1,000,000 frees, this might be much slower than making 1,000,000 allocations through a garbage collector and then freeing them all at once.

In the former case there is a form of "micro-management": you have manually micromanaged every allocation, and then manually micromanaged freeing it.

Instead, the garbage collector might free everything in a few sweeps. Also, allocation with a garbage collector can sometimes be as simple as moving a pointer ahead.

In the past, CGI services were notorious for being slow when shutting down the process. A big portion of this was freeing the memory. In fact you don't need to do this: when a process is shut down, no memory needs to be individually freed. This is a bit like ad-hoc garbage collection.

Another example was Pascal, which had mark/release regions. You could mark a point, and then with release you would free at once all blocks allocated after the mark.

Such things are useful when you don't want to suffer from too much malloc/free, but would like to keep a sane non-GC collection scheme.

And lastly, reference counting: it's another form of garbage collection/manual freeing, and it's used quite a lot. There is one problem: most of the implementations cannot handle cycles, and all of them suffer the penalty of dealing with the internal counter, which is even worse when more CPUs have to touch it.


This might be a silly thing to ask, but why don't they save their data in flat files on a shared filesystem?

They wouldn't need Memcached, since the OS caches files anyway; replication with rsync becomes easy; they don't need transactions anyway; and they wouldn't have so much software to manage.


Storing, tracking and managing billions of tiny files directly on a file system is a nightmare.


What about it is a nightmare?

Isn't storing, tracking, and managing billions of entries done directly in a database anyway?


It's a real pain when you want to inspect, delete or copy the files.

Try taking 300,000 files and copying them somewhere. Then copy 1 file which has the size of the 300,000 combined. The single file is MUCH faster (it's also why we usually do a tar operation before copying stuff, if it's already compressed). Any database that's not a toy will usually lay the 300,000 records out in a single file (depending on settings, sizes and filesystem limits).

The 300,000 files end up sitting all over the drive, and disk seeks kill you at run-time. This may not be true for an SSD, but I don't have any evidence to suggest it one way or the other.

Even if the physical storage is fine with this I suspect you may run into filesystem issues when you lay out millions if not hundreds of millions of files over a directory and then hit it hard.

I have played with 1,000,000 files before when playing with crawling/indexing things, and it becomes a real management pain. It may seem cleaner to lay each record out as a single file, but in the long run, if you hit a large size, the benefits aren't worth it.


In addition,

It doesn't have good querying utilities. You'd have to build your own indexer and query engine. Since you can't put billions of files in a single directory, you'd have to split them into a directory tree. That alone requires some basic indexing functionality and rebalancing tools (in case a single directory grows too large or too small). This is without any more sophisticated querying capabilities like “where X=val” where X isn't the object ID/filename.

Write performance is going to be horrible.

Existing tools for managing, backing up, restoring, optimizing and monitoring aren't suitable for handling huge numbers of small files, or for handling a subset of them (given certain criteria related to the data itself).

You could build specialized tools to resolve all of these issues, but in the end, you'd end up with some kind of database after hundreds of man-years anyway.


The word innovator might be ill-suited to support the argument that mastery generally takes time.

For example, physicists and mathematicians do their best work in their early twenties. Arguably they get worse with age.


I was curious as to whether this was true, so to test it out, I looked at Hilbert's Problems[0], which are widely considered some of the most important problems in Mathematics in the 20th century. Of the solvers whose ages I could find on Wikipedia, the median age was 30.5 and the mean was 29.4. (For Hilbert's 10th problem the solution is listed as the joint work of 4 people. I treated them as one person with an average age of 42.5)

This obviously isn't statistically significant, but it lends some weight to the myth that Mathematicians "die young."


This is great research, thank you.


You see the effect in math because math requires the sharpest most complex pure thinking. Other major creative efforts (business, movies) involve collaboration and social/organizational structures that benefit from years of relationship-building and reputation-building. Or years of drudgery, if your innovation is something like the world's first dictionary.


Many jobs are crappy because companies want guarantees. They would rather have the peace of mind that things are moving at a known rate than accept any uncertainty.

The jobs are not necessarily crappy because companies are interested in keeping wages down. Companies are more interested in boosting profits than in keeping wages down.

The humorous part is so many jobs continue to remain crappy even when you make a boss look good by generating orders of magnitude more value than he bargained for when hiring you.

Even if you explained to the boss how you could generate such value at the time of hire, they still wouldn't accept that it's possible. They don't see how one can generate that much value, even when you present plans for how it can be done.

It's as if the boss is trying hard not to let you generate more value than expected.


Perhaps unbeknownst to you, you suggested that languages can be ranked, and that one of them stands out on top as best.

You are almost right. There is one language that is the best, though it's so young it doesn't stand out yet. This might be the only time in history people still have the chance to contribute to the best language.

That is, Arc: http://arclanguage.org


How about: what's the most important thing people refuse to accept?


Mine is compilers. Which is another way of saying: automating; reducing the work I have to do manually.


I have a question about it: can anyone tell me how it deals with redundancy?

It seems to be distributed. If you are running Plan 9 on 3 machines and one goes down, what happens to the rest of the system? Does it become unavailable?


An important variable not considered by this solution is who the author of a comment is.


Yes, I'm sure this has been thought about before, but what would happen if we threw the idea of everybody seeing the same order out the window? The comments upvoted by people whose comments you have upvoted would have added weight for example, even to the 2nd, 3rd, nth degree. There's a whole bunch of questions that spring from this, I think, and you wouldn't have to go all-in on it, but I think it could be an interesting exercise.


Catering to people with special needs like the deaf & blind has to be one of the most untapped opportunities for a startup.

The angle in is to build something the blind can use that frees them from the difficulty of browsing company websites. Maybe a meta-browser. Maybe something else.

