Sounds intriguing, but after reading through the website for 5 minutes, I still have no idea how this is supposed to work.
From the pictures, it looks like they've set up some kind of system to translate certain physical movements into changes in the projected images. It seems like a non-technical user can only make superficial changes within a limited set of pre-defined behaviors. But from the description it sounds like there's a lot more to it than that...
Right now, each piece of paper in Dynamicland is a program. You put a piece of paper down and it "runs" (the metaphor breaks down a bit at this point, though). With a little bit of code, anything in Dynamicland can become "alive" and interactive.
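To give a flavor (this is my own toy sketch in Python, not actual Realtalk code, which is its own thing): pages publish "claims" into a shared space, react to each other's claims, and the room keeps re-running every page it can see.

    # Toy model of the idea, not real Realtalk: pages make "claims"
    # into a shared database and other pages react to them.
    claims = set()

    def page_clock():
        claims.add(("page-17", "is a clock"))
        claims.add(("page-17", "wishes to be labeled", "12:00"))

    def page_labeler():
        # Any page that wished for a label gets text projected onto it.
        for c in list(claims):
            if len(c) == 3 and c[1] == "wishes to be labeled":
                print(f"project '{c[2]}' onto {c[0]}")  # stand-in for the projector

    # The room "runs" every page it can see, over and over.
    for run in (page_clock, page_labeler):
        run()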
For me, the impressive part of Dynamicland is how physical everything is. Want to see how something works? Find the paper that the code is printed on. Need help understanding some code? Bring somebody over to the wall, table, or couch where the code is. You can annotate the paper, draw diagrams on it, cut it, fold it, tear it, or put it in a book to help others learn how it works. Want to edit some code? Point an "editor" token at the paper program you want to modify, then start changing the code.
Dynamicland isn't a room with a computer, it's a room that is a computer.
> Want to see how something works? Find the paper that the code is printed on.
> Want to edit some code? Point an "editor" token at the paper program you want to modify, then start changing the code.
If you edit a paper program, then won't the source on the paper be obsolete? Will that be indicated somehow? If I'm looking to see what a piece does, how can I tell if the source code on it can be trusted?
> If you edit a paper program, then won't the source on the paper be obsolete? Will that be indicated somehow?
When a program is changed, Realtalk will print a red line over the lines that have been modified or removed. It looks a lot like the output that you'll see from a "git diff" on a file.
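Conceptually it's just a diff of the printed source against the current source. A rough sketch of that stale-line detection (my own Python, not Realtalk's actual mechanism):

    import difflib

    printed = ["claim counter is 0", "wish counter is shown"]
    current = ["claim counter is 0", "wish counter is shown in red"]

    # Lines present at print time but since modified or removed are
    # exactly the ones Realtalk would strike through in red.
    matcher = difflib.SequenceMatcher(a=printed, b=current)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "delete"):
            for line in printed[i1:i2]:
                print("stale:", line)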
The projector is just a means of getting dynamic media. You could imagine future technologies involving e-ink paper that's as cheap as wood-pulp paper. What if we had e-ink embedded in paint? What if all physical objects could be embedded with computation?
That's good, then. Having the code right there isn't very good if you can't trust it, but if it at least indicates what's not to be trusted, that's much better.
To the downvoters: it was a genuine question. If the programs are pieces of paper, then either they are just toy programs, or you have to worry about losing the paper.
But... when does it run? How do you trigger it? How do you step through it with a debugger? How does one program call another program? This seems like a lousy metaphor.
I'm looking for more details as well, the idea fascinates me. Each program looks to be a code printout, so I'm not quite grasping how changing code works with just an "editor token".
The "editor token" is also a program that Realtalk recognizes. It's just a small program that says "I am an editor named X" - you then point the token at the code you want to edit and then use a text editor to make those changes. I've seen people use laptops, iPads, and other Realtalk programs to make the edits.
The important thing to keep in mind is that everything in Dynamicland is, well, dynamic. Right now, editor tokens are just normal Realtalk programs printed on paper, but in the future they might be a different physical object. Right now you edit the code using a text editor, but in the future you might use a pen to write code directly on paper, or perhaps you'd rearrange physical objects that represent blocks of code, etc.
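As a mental model (my own Python sketch, not how Realtalk is actually implemented), the whole interaction is just a couple of claims being matched up:

    # Toy model: the room matches an editor's claim against
    # whatever program the editor token is pointing at.
    programs = {42: "claim ball is bouncing"}   # page id -> current source

    editor = {"name": "X", "pointing_at": 42}   # "I am an editor named X"

    def edit(editor, new_source):
        page = editor["pointing_at"]
        print(f"editor {editor['name']} opens page {page}: {programs[page]!r}")
        programs[page] = new_source             # the page keeps its identity; only its source changes

    edit(editor, "claim ball is bouncing fast")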
The printed code is sort of just documentation -- the system isn't, for example, scanning it visually to decide what to execute. Rather, each card encodes a unique ID, and the running code lives on the network at that ID, so you can point your editor at the card, read the ID, and see and edit the code at that ID. The printout is just the code that was at that ID at the time of printing.
I think the color dots are some hash/key to a program. Whenever the camera sees those dots in its field of view, the system executes the program. The dots seem to serve as location tracking as well, so the program's output is projected onto the physical paper.
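If that's right, the pipeline would be tiny: decode the dots into an ID, look up whatever source the system has stored under that ID, and run it while projecting onto the paper's tracked position. A purely speculative sketch in Python:

    # Speculative sketch of the dot-code pipeline described above.
    COLORS = {"red": 0, "green": 1, "blue": 2, "yellow": 3}

    def decode(dots):
        # Treat the dot sequence as a base-4 number -> page id.
        page_id = 0
        for color in dots:
            page_id = page_id * 4 + COLORS[color]
        return page_id

    registry = {27: "claim this page is a map"}  # id -> source stored by the system

    seen = ["red", "green", "blue", "yellow"]    # what the camera decoded this frame
    pid = decode(seen)                           # -> 27
    print(pid, "->", registry[pid])              # execute/project at the paper's position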
I think you can make more sense of this if you read the piece and watch the video on laser socks [0]. That was a game designed at the CDG lab that is the precursor to Dynamicland.
I've spent a decent amount of time looking at Dynamicland-related things, and it seems consistently grandiose + vague. Maybe it's just that the material they're putting out is more for investors than other hackers. Not sure. The language rubs me the wrong way though:
> Dynamicland is a communal computer,
> designed for agency, not apps,
> where people can think like whole humans.
> It's the next step in our mission to
> incubate a humane dynamic medium
> whose full power is accessible to all people.
Just take "designed for agency, not apps" for example. It's a nice rhetorical technique to contrast apps with agency like this, suggesting that apps somehow deprive one of agency and that Dynamicland can restore it—but it feels pretty disingenuous if you think about it for a minute. (There's a section on the page also titled "Agency, not apps", which brings me no closer to feeling that the use of 'agency' is justified.) In actuality each platform (traditional computing, Dynamicland) has tradeoffs in terms of what one's agency is applicable toward. I can see no reason to think Dynamicland is an increase in agency over traditional computing rather than a lateral (and perhaps complementary) change. I agree it's nice to use your body rather than just fingers, but unless I can do the same things with the 'alternative medium' (and there is no material I've found suggesting a level of generality even in the same realm as traditional computing), I'll complement my computer use with other physical activities and face the reality of fingers for input until a true alternative comes along.
I agree the problem they are addressing is epic, and their language is suitable for describing it, but from what I'm able to glean about the project, the solution so far is on the level of 'neat,' and maybe important for introducing new users to programming—but, if that's what it's for, just say it! Instead we see that the aim of the project is to 'reinvent computing for the 21st century'. So am I going to do my taxes with this? Am I going to use it to write novels? To design 3D models? To conduct research? To build artificial intelligence? To discover new medicines? To call an Uber? Is there any reason to think it might attain the level of generality where it could at any point be an actual alternative to contemporary computing? Or are we supposed to take this on faith because of the people involved? How could this compete with similar systems designed in AR? It seems to me like there's more 'agency' in AR, since a system like this could be replicated within it, but with AR we'd likely have a number of alternative 'physical computing platforms' like it. (Not soon, of course, but they're talking about a timeline of at least 50 years for this project.)
It's definitely refreshing to be able to interact with 'real' objects ('physical' would be less rhetoric-laden), but I feel like the novelty and suggestive language here may tempt readers into forgetting how much power was gained through computation specifically because it didn't require fiddling with physical objects. It's a big deal, and I'm pretty sure that a legitimate alternative computational medium will require deep theoretical breakthroughs, of which I've seen no suggestion here.
There may well be something of value here, but if there is, this website doesn't communicate it. I'm strongly repulsed by the way this project is presented. It reads like a quasi-scam hatched by a pretentious art student. I wish they'd spared a paragraph to say what it actually is.
That would be one of the most obvious things to do with such a system: writing a novel used to be a very spatial task, and current text editors are not good at helping writers organize their novels, identify patterns, etc.
> AR
This is AR, but with projectors, which is not novel in itself: it has been explored in labs, during prototyping events like Museomix, and by digital artists for several decades now. HP even tried to commoditize the technique with the Sprout all-in-one computer.
You are right that the next step is most probably to do without projectors and replace them with AR glasses.
But the manipulation techniques and the collaborative aspects need to be developed and iterated on, with or without AR glasses, and my understanding is that that's the goal here.
I find a lot to agree with in this article, and it explains it pretty well, but the part about the Federal Reserve is a bit too simplistic. The value of money is determined by a complex interplay of factors, including government spending/taxation and international trade. The Federal Reserve has only limited influence on it via manipulation of interest rates. Nevertheless, the main point stands: the strength of fiat currency lies in the implied social contract that measures will be taken to ensure your savings today are not worthless tomorrow.
Looks very interesting! I don't completely understand how to use it (and I guess not everything works yet?), but the UI looks nice, and it looks like it has some really cool features.
I once had visions of a "code-blocks" interface like you're doing, but you've certainly gotten a lot farther than me on that front. I'd love to find out more about what you're doing and if any of my code can help (will send you an email). I don't actually have time to work on this project seriously, so I'm happy to share whatever experience I've gained so that it doesn't go to waste :).
> ... how do you determine who is considered a "scientist"?
In the same way we do now, using a system that would overlook people like Einstein: a patent clerk who hadn't finished his Ph.D. and who published articles with no supporting empirical evidence. In fact, there was little concrete evidence supporting his ideas until the solar-eclipse observations of 1919, which would eliminate him from consideration for the "scientist" label in modern times.
Or Alfred Wegener, a mere meteorologist who had some crazy ideas about the continents floating around and who was shouted down by the real scientists during his lifetime (reliable evidence for plate tectonics only appeared long after his death).
Ironically enough, both stories support a foundational principle of science -- that evidence trumps eminence.
Not to say that the thing about men vs. women isn't problematic, but that quote is taken out of context. The suffrage part is a completely separate bullet point and isn't saying that women shouldn't have the right to vote as your quote suggests.
Also, the main point preceding the quote that men and women may have different natural dispositions is a reasonable one. He only oversteps by implying men are better leaders, a claim that may be defensible under some very specific definition of "good leadership" but is mostly just inflammatory in this case.
You are right, I didn't notice at first that I was conflating two different ideas. Thanks for the correction!
But it is a short step from "men have natural abilities and disposition toward leadership" and "people shouldn't have universal suffrage" to "women shouldn't vote."
This seems like a tricky case because if the users and drivers are being completely rational, it's not clear that Uber's deception should have affected their behavior. Sure, the drivers expected to make X% of fares, but realistically they probably estimated what that would come out to be not from looking at customer fare data (in which case the deception would have materially harmed them) but rather by tracking what they or other drivers were actually getting paid.
Still, if the allegations are true, a culture where it is fine to engage in such active deception is not a good sign.
Also, it's really slow and wasteful to be doing bounds-checking for each voxel of an image that is almost completely not a boundary, whereas you can't avoid checking if each voxel is lit. :-)
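For instance (a NumPy sketch under the assumption of a boolean lit/unlit grid, not the actual code in question): pad the volume once and every neighbor lookup becomes unconditional, while the lit test still has to touch every voxel.

    import numpy as np

    vox = np.random.rand(64, 64, 64) > 0.5       # hypothetical lit/unlit grid

    # Pad once so neighbor reads never need a per-voxel bounds check.
    p = np.pad(vox, 1, constant_values=False)
    all_neighbors_lit = (p[2:, 1:-1, 1:-1] & p[:-2, 1:-1, 1:-1] &
                         p[1:-1, 2:, 1:-1] & p[1:-1, :-2, 1:-1] &
                         p[1:-1, 1:-1, 2:] & p[1:-1, 1:-1, :-2])
    boundary = vox & ~all_neighbors_lit          # lit voxels with an unlit 6-neighbor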
It makes sense to control pedestrian traffic at some intersections, but it just seems like they're not even trying to optimize pedestrian throughput. What makes the lack of feedback from crossing buttons worse is that many of them are so conservative about letting you cross that you question whether they are working at all. I encounter a lot that will refuse to let you cross if the light is already green, even if there is still plenty of time left. Unfortunately, reducing pedestrian accidents looks great, while saving pedestrians time is hard to measure.
Absolutely YES. Sage is and will always be 100% open source. To ensure this, the copyright is spread among over 500 people, and the code is GPL-licensed. The software written by the company SageMath, Inc. is also completely open source (https://github.com/sagemathinc/smc).
This. $200 for 4 months for 25 users works out to $2 a month per user. You could double or triple that and it would still be less than the personal plan.
Those large multiuser plans probably come out of grants, departmental budgets, etc., so they're likely not all that sensitive to price. Pretend you were still in academia and had found some great cloud-hosted software online that you wanted to use in a course: would it change your or your department's decision about purchasing it if covering 70 students for a semester cost $400 vs. $800 vs. $1000?
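Back of the envelope, at those hypothetical price points:

    for total in (400, 800, 1000):
        print(f"${total} / 70 students = ${total / 70:.2f} per student for the semester")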