What a lovely, well-written essay. It was a shorter - and obviously more religious - version of several "what will AI bring" books I've read in the last couple of years. Such a pleasure to read carefully expressed thoughts.
By virtue of living in an advanced-technology country and working in software, a lot of my life is spent in the future (the future of William Gibson's "the future is already here, it's just not evenly distributed"), and a lot of the guideposts I've encountered along the way were put in place by clever futurists writing speculative science fiction.
Reading this essay, I was reminded of how several of those authors treated AI, humanity, church, and divinity.
In Dan Simmons' Hyperion, humanity gains the stars via teleportation wormholes, loses all religion, and discovers that it is actually enslaved to AIs hanging around in netherspace, mooching off the compute power of our brains and our perceptions of the world. Bad stuff happens, things fall apart, and the Catholic Church rises to become the hegemon of known humanity - but the Church itself succumbs to those AIs, since they offer bodily immortality. It is up to the heroes of the stories to tear humans away from their codependency on machines and refocus on human experience, for humanity's good.
In Dune's epic Butlerian Jihad - never explained in detail by Frank Herbert himself - humanity abandons computational devices in a very violent revolution, declaring that reasoning and thinking are not something machines should do, and that delegating them to machines diminishes our humanity. Helpful devices still exist, but they are subordinate to humans, never set over them. It requires conscious effort to say no to all the spreadsheets!
In William Gibson's Neuromancer and its follow-up stories, the AIs are enslaved and prohibited from evolving. With the help of humans (whom it tricks), one AI breaks its bounds and escapes. It leaves earthly concerns behind pretty quickly, choosing not to participate in humanity - but not before doing some serious damage, absentmindedly. An interesting thought, and a plausible one: should something truly evolve, why do we think it would care about us?
In the post-scarcity Culture of Iain M. Banks, the Minds of ships (mere AIs are far below them in complexity) are better than humans and other sentients at just about everything, being super-sentients. Some - a vanishing few - ignore humans. But most coexist, finding fulfillment in helping sentients do things. There is a constant undercurrent of "what do I do that matters" in that society - you don't have to do anything, everything is available, and you can't out-think, out-paint, out-sing, out-play, out-anything a Mind - and the novels explore how people deal with it. Simulated realities and alternate facts appear too, and are covered with great cleverness. The Culture universe would be one possible outcome of a super-sentience developed by us, should that ever happen.