Why Can’t Programmers Be More Like Ants? Or a Lesson in Stigmergy (2015) (acm.org)
77 points by RKoutnik on Jan 8, 2016 | hide | past | favorite | 27 comments


Stupid, stupid, stupid.

Ants do not construct any high-level map of the terrain they are working in or plan of what to do. Every action of every ant is strictly local. The author seems to hold this as a virtue, but it's well established that building a large program that way leads to an unmaintainable mess at best, and complete failure at worst. It can perhaps work for smaller programs in well-understood domains, and where the overall architecture is clear to everyone, but anything large, novel, or inherently difficult must be designed.

Contrast this brilliant Peter Naur essay someone linked to the other day: http://www.dc.uba.ar/materias/plp/cursos/material/programmin...


Individual ants do not. The colony as a whole, however, does, by virtue of its shared stigmergic responses; those responses have been selected for precisely because they produce that structure.

People, of course, can do better - we can map out the terrain, come up with a plan, and then follow swarm methodologies.

In this kind of situation, swarm methodologies look like recursively applied minimum-viable-product development: Look at the situation. Determine the smallest unit of change to produce meaningful results. Make that change. Repeat.

Do this at all levels of the organization and code base: from the top (the people making the map and the plan) to the bottom (an hour's worth of code).

The essence of stigmergic coordination is that I respond to the situation as it is, rather than the situation as I desire it to be ("the plan" - how I desire the situation to be in the future), and that when I finish a task, I re-evaluate the situation as it is rather than proceed along a set path ("the plan").
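A loose sketch of that loop in Python (every name here, like `smallest_useful_change`, is a hypothetical illustration, not an established API):

```python
# A toy, self-contained sketch of the "recursive MVP" loop described above.
# The "situation" is modeled as a list of open tasks; the smallest useful
# change is closing one of them. Everything here is a made-up illustration.

def smallest_useful_change(situation):
    """Pick the smallest unit of change that produces a meaningful result."""
    return min(situation, key=len)  # e.g. the shortest open task

def apply_change(change, situation):
    """Make the change, leaving the situation visibly altered for the next 'ant'."""
    return [task for task in situation if task != change]

def stigmergic_loop(situation):
    """Respond to the situation as it is, re-evaluating after every step."""
    while situation:                                 # look at the situation
        change = smallest_useful_change(situation)   # pick the smallest step
        situation = apply_change(change, situation)  # make that change
    return situation                                 # repeat until done

print(stigmergic_loop(["refactor parser", "fix typo", "add tests"]))  # -> []
```

The only point is that each iteration reads the current state, never a stored plan.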


Did you read the article? I'm not sure how e-mails, comments, and code reviews, all given as examples of stigmergic markers, constitute strict locality. The author is advocating for nothing of the sort. It seems like they are arguing for the communications to be closer to the code (and github is a good example of how this works well).


There may be some valid content to the article, but using the behavior of ants and termites as a guiding metaphor strikes me as so totally wrong-headed that I can't get past it.


There's still a lot to learn from stigmergy: synchronization can be mediated through signals and state in a shared environment. Evolution probably uses this mechanism more widely than we currently understand. To speculate, just think how human mood is affected by global chemical levels such as serotonin.


> It can perhaps work for smaller programs in well-understood domains, and where the overall architecture is clear to everyone, but anything large, novel, or inherently difficult must be designed.

The disciplines of economics and biology might take exception to this.

http://fee.org/freeman/i-pencil/

> Actually, millions of human beings have had a hand in my creation, no one of whom even knows more than a very few of the others. Now, you may say that I go too far in relating the picker of a coffee berry in far off Brazil and food growers elsewhere to my creation; that this is an extreme position. I shall stand by my claim. There isn’t a single person in all these millions, including the president of the pencil company, who contributes more than a tiny, infinitesimal bit of know-how.


As intriguing as some of these parallels are, I think one should be wary of analogies that try to explain one poorly-understood thing (in this case, the process of software development) in terms of a thing that's understood even less (the way ants act).


Aren't we already doing this? Issue trackers, project wikis, and TODO comments in the code are all stigmergic practices.


Human beings aren't great at the whole hive mind thing. We miscommunicate, have opinions, argue, and do various other stupid things. Programmers will never work like ants unless the entire dev team consists of a Star Wars-style clone army.


http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1933734

The Superorganism Account of Human Sociality: How and When Human Groups are Like Beehives

Abstract: Biologists call highly cooperative and socially integrated animal groups such as beehives and ant colonies “superorganisms”. In such species, the colony acts like an organism despite each animal’s physical individuality. This paper frames human sociality through the superorganisms metaphor by systematically reviewing the superorganismic features of human psychology. These features include: (1) mechanisms to integrate individual units, (2) mechanisms to achieve unity of action, (3) low levels of heritable within-group variation, (4) a common fate, and (5) mechanisms to resolve conflicts of interest in the collective’s favor. It is concluded that human beings have a capacity to partly and flexibly display each of these superorganismic properties. Group identification is a key mechanism that activates human superorganismic properties, and threats to the group a key activating condition. This metaphor organizes diverse aspects of human psychology (e.g., normative conformity, social identity processes, religion, and the “rally-around-the-flag” reflex) into a coherent framework.


So contributors to a code base are a superorganism? Interesting.


Ants don't need to achieve return on investment for the ants in business suits.


...and I was naïve enough to think that programmers will be compared to artists...


Interesting, I never thought about this. Ants know what to do next without having to think; they just continue what was done by another ant.

In some way this article reminds me that, in software, if we have continuous delivery, where we can see the current state of the software, we can be the next ant and produce a better product, like an ant seeing what was done by another ant and continuing.

Software is abstract. It's much easier to continue, like being the next ant, when you can see the automated tests and the result of a hot deploy; these are fast feedback loops.

PS: Why are people feeling offended?


Leaf cutter ants take suboptimal paths when the pheromone trail is strong enough. This implies a folly of the crowd, which is definitely not something we should aspire to follow.

http://www.lse.ac.uk/newsAndMedia/videoAndAudio/channels/pub...


Ants are actually quite inefficient workers. Half the time they are going somewhere pointless. But that's an acceptable evolutionary trade-off for them, and as a whole they can still survive effectively.

I don't think it is helpful to cast email interaction, comments, phone calls, documentation, and webcast meetings as mere 'markers to achieve stigmergy'. Human beings are capable of far more complex, far higher bandwidth communication than ants, of which all of these are examples. We can, for instance, inform others of a problem and simultaneously let them know someone is already working on it. For the same problem, many ants will flock towards the problem, even if it turns out to be a false alarm or something quickly solved by the first ant to arrive or ...

Humans have the ability to immediately convey information about far away (in space or time) places to far away places and to immediately update that information.

Perhaps there are things to be learned from ants, but this article does not make it clear to me what that is.


This is pretty much the whole point of test-driven development. Once the tests are the authoritative source of truth for what the code is supposed to do, knowledge of the system in the mental-model sense becomes almost superfluous: all a buck-newbie coder needs to do is find out why some test failed and make it pass. Of course you can't sneeze without writing (perhaps lots of) test code, but hey, the company's stock price is no longer beholden to snobbish devs who monopolize the mental models of the code. Hallelujah!
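In miniature, the dynamic being described (the `slugify` function and its tests are hypothetical examples, not from the article): once the tests encode what the code must do, a newcomer only needs to make the red bar green.

```python
import unittest

# A deliberately tiny illustration of tests as the source of truth.
# The function and the tests below are hypothetical examples.

def slugify(title):
    """Turn a title into a URL slug: trim, lowercase, spaces to hyphens."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # A newcomer who breaks slugify() learns what it is supposed to do
    # from these tests alone, with no mental model of the wider system.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Run the tests in-process; exit=False avoids killing the interpreter.
unittest.main(argv=["slugify_test"], exit=False, verbosity=0)
```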

Like open-plan offices, the benefits to management of TDD cannot be overestimated when considering what drove its adoption.

Also I get the feeling this guy has never read The Mythical Man-Month. Top-down development following a strict waterfall model? Brooks shows that even inside stodgy IBM, no one did that if they wanted to get work done (as opposed to producing reams of paperwork in order to satisfy auditors and assure them that work is getting done).


Programmers can always find ways to improve a codebase at the technical level. And they can think up new features. So they can keep working on something as long as you'll let them. But is the 'marker' of a TODO comment buried in a million-line codebase going to seem more important than a programmer's vision of a cool new thing or rewrite? What about prioritizing the new features that could actually get new clients/sales? Or prioritizing fixing the pain points that normal users experience but programmers don't notice? What if the next TODO is going in a good direction technically, but not the right direction from a business standpoint?

There might be something to this idea for non-critical hobby projects (wasps build nests for their own use, not to sell), but it seems somewhat limited.


Says the guy who relies on Linux.


>Software structures underwent a radical change as their development teams became distributed across time and space, as shown in Figure 1b. Instead of a centrally planned, top-down, linear structure, code structure became network-like. Shortage of programmers, lower labor costs in emerging countries, and lifestyle preferences of the extreme programmer all contributed to this tectonic shift.

That paragraph is not explicit about whether he's talking about open source collaboration or corporate distributed teams, but since his references include FLOSS and "social coding" on github, I'll assume "open source" to interpret the following two paragraphs:

>Stigmergy ensures tasks are executed in the right order, without any need for central planning, control, or direct interaction between the actors performing the work. [...] Markers make stigmergy more efficient, by more reliably focusing a programmer’s attention on the most relevant aspects of the work that needs to be done.

These are interesting ideas but I think any benefits of markers[1] are drowned out by the desires of programmers who are not beholden to a manager's agenda. Programmers volunteering to contribute often work on what is interesting to them.

It's very unlikely for a volunteer programmer to wake up with the irresistible desire to "hunt for memory allocation bugs in OpenSSL to prevent heartbleed". On the other hand, a Microsoft manager can direct one of his programmers to add a shim to Windows Vista so Quicken 2005 is compatible and future customers upgrade without fear. That type of work is drudgery but he's getting paid a salary and doesn't pick his tasks at random.

A stronger lever than "pheromone markers" in source code is sponsorship. E.g. a corporate entity finds value in an open source project but there are some gaps that programmers are voluntarily working on. The business entity pays a salary to programmers to prioritize that work higher and get them done.

[1] Don't know about Tesseract, but it's probably something beyond sprinkling markers such as "//TODO: optimize this function is next highest priority" throughout source files, or overhauling github to have a leaderboard of "important things to do next".


>It's very unlikely for a volunteer programmer to wake up with the irresistible desire to "hunt for memory allocation bugs in OpenSSL to prevent heartbleed".

Honestly, I find that kind of work fun. The problem is one of incentives, in that features are more highly valued than unknown bugs.


we're already drones. If i wanted to be an ant, i'd join the military.

>lifestyle preferences of the extreme programmer

talking about stereotypes, fashion, and fads.

Though the article does state one point which i've always found funny - software teams/organizations whose job is to build/design complex/network/distributed systems fail to see that they themselves are subject to the same basic rules of complex systems.

Like for example "latency vs. bandwidth" - software organizations everywhere are trying to introduce SCRUM - a "low latency at all costs" process - in order to increase productivity, i.e. bandwidth, and they find themselves deeply surprised when productivity actually falls as a result of SCRUM.


> we're already drones. If i wanted to be an ant, i'd join the military.

Author is talking about ants as they are (which are really awesome!) rather than ants as a poor metaphor for mindlessness.

> lifestyle preferences of the extreme programmer

As my job duties have gotten more intense and I've needed to become more productive (I say while arguing on the internet at work...) I've really come to appreciate the utility of preferences. As you need to do more, better, faster, you start figuring out the little things that maximize yourself so that you /can/.

When I see SCRUM "fail" in that productivity goes down, it's not that teams produce less, it's that they fail to produce what's actually valuable. The 10x engineer doesn't write 10x the code, they write the code that's 10x the value.


>> we're already drones. If i wanted to be an ant, i'd join the military.

>Author is talking about ants as they are (which are really awesome!) rather than ants as a poor metaphor for mindlessness.

it is exactly the same, just painted with different sentiments. Self-organization by way of deciding what to do based solely on the current state of the local context is exactly the military way - you carry out your order without taking in and analyzing the global context of that order. I.e. mindlessly. It is a _good_ thing there.

>When I see SCRUM "fail" in that productivity goes down, it's not that teams produce less, it's that they fail to produce what's actually valuable.

yes, that is one of the ways SCRUM kills productivity by design - many valuable things in software can't be produced (or are much harder to produce) while adhering to the strict latency requirements of SCRUM, and thus people/teams produce what they can instead while staying inside those latency limits. Again, it is basic systems analysis - decomposing a system and introducing interfaces in unnatural places just to meet the artificial requirements of the process weakens the system and leads to its degradation. As i said - it is funny how supposedly professional system designers/developers, i.e. software engineering teams and orgs, fail to see it in many cases.


> Self-organization by the way of deciding what to do just based on the current state of the local context

Isn't this only a problem because the "local context" is limited? Humans can handle really large "local contexts" (otherwise, who'd make the plan in the non-stigmergic organization?), and we can add a big set of time-dimension information to that context ("here's what we're trying to achieve and the information we have about best doing that").

In the end, swarm methodologies are about empowering individual agents with intelligence and information such that they "inherently" coordinate and the result of that coordination is the desired, overall effect. "Hire good people, keep them informed, and get out of their way."

But yes: things that focus on rapid iteration (which is how stigmergic methods work) aren't good at large, single-chunk problems. If you can't iterate your way to a solution, these methods won't work for you.

There are some problems and individuals that are well suited to multiple agents iterating closer to an optimal solution, and there are problems and individuals that aren't. The world has me convinced that technology development is the latter and product development is the former: it depends on how quickly I need to adapt to changes beyond my control.


> swarm methodologies are about empowering individual agents with intelligence and information

man, it is gold. Sounds like a great pitch to lure people into the swarm... Yet it is completely contradictory. A swarm is called a swarm because it consists of insects. For swarm intelligence to work, individual intelligence must be dampened, almost turned off, and the information dispensed to an individual must be minimal, to avoid him/her swerving off the prescribed course of action that the intermediate or global plan/context may heavily depend upon. It is like a bricklayer or a soldier on a mission - seeing the global plan, and thus possibly trying to optimize it by altering his own mission/plan or its place/time inside the larger one, may affect his carrying out of the mission or the bricklaying scheme and thus endanger the bigger context.

>Isn't this only a problem because the "local context" is limited?

it isn't a problem. It is a feature.

>Humans can handle really large "local contexts"

not really. This is why we have the process of abstraction.


Oh man, I experience totally the opposite. It's a trope on here, but: go to Burning Man. It's almost the definition of a human swarm, but it works specifically because everyone is empowered, informed, and relied on to resolve issues involving themselves.

I also see a parallel in how (at least one) Japanese company does their robotic assembly line: they have one or more human-operated lines, because if you don't know how to do it well, how can you teach robots how to do it well?

There are lots of reasons you're wrong. One is that the situation you've described still requires someone to understand what the swarm agent needs to do - why not have that be the agent, as well? They have the most up-to-date and relevant information about their situation.

Another reason you're wrong is that a "swarm" doesn't have to consist of insects. The first software swarms were actually birds.

Another reason you're wrong is that while swarms allow for success through mass action of dumb agents, the swarms actually improve as you improve the individual agents - compare the simplest ant algorithm applied to network connectivity with one wherein the individual agents mutate and learn. The second one does a better job.

...Another reason you're wrong is that swarm agents don't have "paths" picked for them. That would (for example, in ant swarm path finding) actually prevent the swarm processing from occurring. Rather, the agents apply principles (in software, an algorithm) to the situation they're in. In other words, the global plan is in fact relying on the individuals "going off course" and trying to improve the overall situation.
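For what it's worth, the ant path-finding idea mentioned above can be sketched in a few lines. This is a mean-field toy (tracking the expected pheromone laid by many ants per round, rather than simulating individual random ants); the routes, costs, and constants are invented for illustration:

```python
# Two routes from nest to food; no ant is ever told which one is better.
costs = {"short": 1.0, "long": 2.0}        # the short route is cheaper
pheromone = {"short": 1.0, "long": 1.0}    # start with no preference
EVAPORATION = 0.1

for _ in range(100):                       # 100 rounds of foraging
    total = sum(pheromone.values())
    updated = {}
    for route, cost in costs.items():
        share = pheromone[route] / total   # fraction of ants taking this route
        # Old trails evaporate; deposits are larger on cheaper routes.
        updated[route] = (1 - EVAPORATION) * pheromone[route] + share / cost
    pheromone = updated

# Positive feedback concentrates the trail on the shorter route.
print(max(pheromone, key=pheromone.get))   # -> short
```

The same positive feedback is also what can lock a real colony onto a suboptimal trail, as another commenter notes about leaf cutter ants.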

This all actually reminds me of both "Why I Like Java" and "Conservative vs Liberal Programming" - in that it's basically about: Do I prevent bad actors from doing harm, or do I empower good actors to make more good?

If one bricklayer going off-course in detrimental way derails your plan, you don't have enough good bricklayers. Fire the bad ones and/or hire more good ones.



