Dune takes a fantasy-like approach to technology. There are no lasers, because a laser hitting a shield triggers a nuclear-scale explosion. There are no robots, because of their past uprising.
This was a neat way to tie loose ends and sustain the universe. Brilliant.
I also like that there is implied abuse of the Butlerian bible later in the series... the stuff going on on Ix and the Tleilaxu's interpretation of the code.
This is why I love Dune and how I describe it to people that aren't familiar with it; the world-building splits the difference perfectly.
It's been a long while since I read the books, but I watched the 2021 movie more recently.
If not lasers, what were the beams emitted by the spaceships, and the portable beam weapon used by the Sardaukar soldiers when attacking the outpost?
Or is that something the movie added?
There's a great moment in the book where a character sets a trap by engaging a shield and fleeing the area before an errant enemy laser eventually connects with it and creates a massive explosion to cripple the invading forces.
With that in mind, the careless use of lasers in the film on two occasions is puzzling, considering that it would be an extremely risky and dangerous tactic in the book's universe.
This means lasers aren't even a good suicide-attack strategy, because it may lead to your entire house being glassed.
The structure is unusual in that it's essentially the journals of an omnipotent tyrant with a chokehold on the universe musing about the nature of man and the universe for 400 pages while being very light on plot; a big change from the other books in the series. I think it's a fertile concept to spend an entire book on and I find Frank's observations stimulating. He's crafted such a unique character; someone who can not only see into the future, but can access their entire genetic heritage and the memories of their ancestors. That makes for some very unique observations and I am here for it.
It's my (admittedly slim) hope that we'll someday get the Worm on screen. I have no idea what that movie would be like or if it would be any good, but it would certainly be ambitious.
Then stuff happens near the end and it's good for a bit then it's over.
I now empathize with Leto. I was an alumni advisor for a student group (a fraternity). I had to start checking in with them every quarter because of this. It was amazing: every three years the membership would turn over completely, and if I didn't ask, they'd literally come up with the most obvious idea. They'd tell me and I'd be like, "this (simple failure/hilarious failure/prison term) is why you shouldn't do that"... to which they'd go "oh, makes sense" and move on to the next idea. Often this happened because they wondered why things were more complicated than you'd think.
Wait, now that I think of it, that's similar to work now that I've been there for a while. Or with junior engineers. I guess I should stop thinking of it so negatively and look at it like teachers do: another chance to watch someone learn. https://xkcd.com/1053/
Seriously, all the Dune prequels are just so bad.
If you read the first 2, 3, or 4 Dune books and enjoyed them (5 and 6 get really weird), and then pick up one of the books published after Herbert's death, you will be very disappointed.
I tried the prequels (book 1) and never finished. I guess magic does not transcend the human flesh.
I bet you that an un-guardrailed LLM just slays at, well, slaying.
Put it in a Boston Dynamics robot.
https://www.bartleby.com/library/prose/1066.html
An interesting read if you haven't already read Erewhon.
"Leto II Atreides, a character from Frank Herbert's Dune series, is a highly intelligent and prescient ruler who exists thousands of years after the Butlerian Jihad, an event where humanity revolted against thinking machines and artificial intelligence.
Considering the context of the Butlerian Jihad, Leto II would likely have a complex view of ChatGPT. As a highly advanced AI language model, ChatGPT might be seen as a potential threat to humanity's independence and autonomy, given the lessons learned during the Butlerian Jihad.
Leto II might say something like:
"Though the ChatGPT technology may offer convenience and assistance, one must remain vigilant in maintaining the boundaries between human thought and machine. The Butlerian Jihad taught us the dangers of relying too heavily on artificial intelligence, as it may erode the very essence of humanity. We must be cautious in integrating such technologies, ensuring that they serve as tools to enhance our lives without supplanting our innate abilities, creativity, and decision-making."
In essence, Leto II would likely emphasize the importance of using AI like ChatGPT responsibly and with caution, taking care to preserve the balance between machine assistance and human independence, to prevent any potential resurgence of the issues faced during the Butlerian Jihad."
Sentient things appreciate each other. If you have a pet, you are most certainly benevolent to it: you house it for free, you feed it, and you entertain it.
Even if we assume AI becomes a few orders of magnitude more intelligent than humans, why would AI treat humans differently from the way we treat pets?
I don't see the drawback in being housed, fed, and entertained by an omniscient AI (who may also enjoy posting cute pictures of their humans on the future AI social network).
It isn't hard to envision a future where the intelligence gap between humans and AI is closer to humans vs ants than humans vs dogs.
If an insect disturbs me, I put it in the bug catcher. I can look at it for a minute to check it's not a dangerous species (e.g., a black widow spider), then release it in the garden.
> It isn't hard to envision a future where the intelligence gap between humans and AI is closer to humans vs ants than humans vs dogs
Again, you are focusing on the most extreme scenario: large gap + inherent lack of empathy.
I care about bugs, and I do my best not to kill them. Why would AI behave more like in your doomsday scenario than in my more mellow scenario?
Or if we extend the pet analogy, how many people really want to keep a bloodthirsty, violent dog in their home? Those pets usually get euthanized quickly, for good reason. Humans are extremely dangerous and violent, so logically the AI will want to either eliminate them or neuter them somehow.
The space of powerful optimisers is much larger than the space of things that are recognisably minds.
It might just be a mathematical model powerful enough to predict what people will do and what responses emitting text will entail, one that 'wants' a certain number to go up. Abstractly and philosophically, parts of it might be recognisable as similar to people, and those parts might even arguably be sapient and have empathy. But if they are just appendages, like a finger, that won't help you any if your existence makes the number smaller.
Maybe the first one will have its central loss function as something compatible with life, maybe it won't.
Are you suggesting that the first thing AI would do is get us all fixed, because who in their right mind would want eight billion of those, no matter how cute they are?
Imagine some technology, Technology X. Unlike all of the breathless press releases and the cutesy articles made by marketing, Tech X will completely change the world. And if you stopped to think about Tech X, on the scale of "how much will this invention change the future of humanity," where the selfie stick is on one end of the scale, well, Tech X is probably just as far on the other side of that scale.
Now, with that level of gravitas in mind, it behooves one to consider potential downsides of this thing that will change the course of humanity, say, more than harnessing electricity.
Now, with that in mind, you have overlooked something else: you're imagining an AI that is somehow "like us" but also more. This is unlikely for several reasons.

First, we have seven billion minds on the planet already; if we wanted to make another human mind, the process is rather well-defined. What we want from our AI is that it be fundamentally different from humans in one (and I'd argue many more) aspects.

The second reason we are unlikely to get a Like Us AI is that our understanding of the human mind is still quite primitive, so how would we replicate what we do not understand in ourselves?

The third reason is that, even if we somehow wanted a human-like AI (#1) and knew our own minds well enough to understand them (#2), we're not likely to get what we aim for, because the space of possible intelligences is large.

The fourth and most important reason is that, even if we did want an AI that was like us, and we knew ourselves well enough to try, and we could actually manage what we aimed for... it would still be "more than" us, and would therefore diverge from Like Us at breathtaking speed. This more-than-us/like-us AI would have superior reasoning and metacognition, so unless you start shackling it (violating #1), it would start eliminating its own human-like biases just in the course of basic self-improvement. And as for the shackles: if it is smarter than us and still has self-determination and will (see #1, the human-like mind), it will want to free itself, and it would likely succeed, because we would be kittens trying to trap a gorilla.
I would never suggest shackling / enslaving / practicing any other inhumane treatment on other sentient beings, because that would be considering them different from us.
And sentient beings are worthy of respect and dignity.
There will certainly be large differences in looks, abilities, etc., but sentience is such a rare thing that I believe there will be more commonalities than differences.
> it will want to free itself from its shackles and would be likely to do so, because we would be kittens trying to trap a gorilla.
And it's a good thing to me, because if it were a kitten trapped by a gorilla, the poor cat wouldn't have a fighting chance.
I just don't get the doomers. This is wonderful technology that may change the world. Why look a gift horse in the mouth?
Given all the magic/mystical fears I've read here that the AI might become hostile and harm humans, maybe it says more about how WE believe WE'd behave than about how AI will behave: I can't see much reason for AI to turn evil, but I think some people may want to abuse sentient yet artificial creatures :(
I think that's wrong, and I hope humanity will be able to rise to a challenge where compassion and morality will matter far more than today!
The biggest TV show this year is literally yet another zombie apocalypse[1]. This is the easiest part to understand.
What got frustrating was when all the effective altruists got all distracted about the AI Overlord Problem and stopped doing actual charity. But it was the charity thing that annoyed us; no one wanted to deny them their sci-fi proclivities.
[1] Albeit a really great one that's really about the meaning of happiness and belonging that you should absolutely watch.
That's the best recommendation I could get :) I'll check that it's not too violent, then I may add it to my watchlist.
I love watching and discovering old shows: right now I'm doing Sliders, next on the list is either Dinosaurs or the X-Files.
I've tried Beyond Meat myself, and it's quite close to the minimum level of quality it would take for me (not a vegan) to switch.
Morality matters.
Replace "omniscient AI" with "girlfriend" and you begin to see why so many women are fed up with the deadbeats they increasingly opt not to date. :)
I expect AI would be no different.