Roko's Basilisk is a thought experiment about an all-powerful AI from the future that tortures everyone who did not actively help bring it into existence.
This Slate article on the Basilisk goes into more detail than is frankly necessary. The only thing more comical than the whole idea is the fact that people took it seriously and lost sleep over it.
The neoreactionary movement, also known as the Dark Enlightenment, is a fringe internet subculture that believes the best system of government is a techno-libertarian CEO with absolute power. A king, in other words. King Steve Jobs, to be precise.
This RationalWiki entry goes into more detail than you ever needed to know. Gluttons for punishment should know they have a subreddit.
Tardis Eruditorum writer Phil Sandifer just Kickstarted a book on both (here) in the style of classic conspiracy-theorist fanzine rants. Regardless, I figured the basic existence of these things would raise a chuckle or two.
Roko's Basilisk and the Dark Enlightenment.
"In the neighbourhood of infinity; it was the time of the giant moths..."
Re: Roko's Basilisk and the Dark Enlightenment.
Ah, an escaped thought experiment. Those things are dangerous!
I remember, a few years back, scary articles claiming that floating space brains outnumber us and that our view of the universe is therefore all wrong. Despite some quite obvious flaws in the reasoning, it got a lot of coverage.
---
As for the best form of government, I'd happily advocate for an AI taking over the world, provided the AI was trained to optimize the state of the world to "maximize people being in a state they want to be in" and, when in doubt, favor diversity (i.e., run different bits of the world in different ways).
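Just to make the goal concrete: here's a toy sketch of what that objective might look like if you had to write it down. Everything here (the region fields, the weighting, the numbers) is illustrative, not anything from an actual system; the point is just that "people in a state they want to be in" becomes an average satisfaction term, and the diversity preference becomes a small tie-breaking bonus for running regions under distinct policies.

```python
# Toy sketch of the proposed objective. All field names and weights are
# made up for illustration.

def satisfaction(region):
    """Fraction of people in this region who are in a state they want."""
    return region["satisfied"] / region["population"]

def diversity_bonus(regions):
    """Reward running different bits of the world in different ways."""
    distinct_policies = {r["policy"] for r in regions}
    return len(distinct_policies) / len(regions)

def world_score(regions, diversity_weight=0.1):
    """Population-weighted satisfaction, plus a small diversity tie-breaker."""
    total_pop = sum(r["population"] for r in regions)
    avg_sat = sum(satisfaction(r) * r["population"] for r in regions) / total_pop
    return avg_sat + diversity_weight * diversity_bonus(regions)

world = [
    {"population": 100, "satisfied": 80, "policy": "market"},
    {"population": 50,  "satisfied": 45, "policy": "commons"},
]
print(world_score(world))
```

Of course, the hard part the thread is pointing at is hidden inside `satisfied`: actually knowing what state people want to be in is the insanely-smart-machine part, not the arithmetic.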
http://www.fanficmaker.com <-- Tells some truly terrible tales.
--
Last update: Mice, Plumbers, Animatronics and Airbenders. We also have the socials: Facebook & G+. Give us a like if you can, it all helps.
Re: Roko's Basilisk and the Dark Enlightenment.
Darkflame wrote: Ah, an escaped thought experiment. Those things are dangerous!
I remember, a few years back, scary articles claiming that floating space brains outnumber us and that our view of the universe is therefore all wrong. Despite some quite obvious flaws in the reasoning, it got a lot of coverage.
---
As for the best form of government, I'd happily advocate for an AI taking over the world, provided the AI was trained to optimize the state of the world to "maximize people being in a state they want to be in" and, when in doubt, favor diversity (i.e., run different bits of the world in different ways).
I wouldn't trust a trained AI. I would trust a smart one, though.
Re: Roko's Basilisk and the Dark Enlightenment.
I'd happily switch from "trained" to "evolved" if you think a genetics-based algorithm would win out over a neural-net-based one.
In either case, though, you would need selection criteria for its development, and those criteria would dictate its attitudes, ethics and intelligence.
I think "maximizing people being in the state they want to be in" is a good ethical rule of thumb in terms of a goal. Obviously a machine able to understand that goal, let alone achieve it, would be insanely smart. Humans emulate other humans internally all the time (theory of mind) this machine would need to do that on a huge scale.