Civil Rights of the Far Future

Discussion by: EODsplosion
I understand that civil rights are a pretty big deal in today’s world. There is still a lot of prejudice against people of certain races, religions, non-heterosexual orientations, etc., but I’m curious about what the secular community thinks about the far future and civil rights. A time when “human rights” may not suffice anymore.

Examples that I can think of are “individuals” whose rights are not even thought about today due to the fact that their existence is mainly considered science fiction.

I’m curious as to what people think about sentient computers. I personally believe that once we reach a point in time when a computer can feel and have rational thought it should have the same rights as a human, but I’m sure that there will be a relatively large majority of people who will think of these (what I’m going to call) beings as nothing more than complex circuitry. I’m curious as to what people think about the civil rights of a sentient computer.

Another thing I wonder about is the civil rights of an intelligent extraterrestrial being. I don’t think this would be as difficult a problem as intelligent computers, but it’s still an issue that I can foresee in the far future. I think it would largely have to do with which civilization was more technologically advanced, but if it turned out that it was ours, I could foresee an issue if our civilizations began to merge.

What do people think about these points if they were to happen? What do people foresee as the main issues when it comes to the acceptance of these beings in society? I’ve never really heard this discussed before and even though it has almost nothing to do with present day issues I would still like to know what people think about it. Personally, the idea of far future civil rights is an exciting topic for me. What does everyone else think?

-Nicholas Staal

46 COMMENTS

  1. I think if a machine is truly sentient then it should have all the rights any other sentient being has. I wouldn’t be surprised if religious arguments, like “it wasn’t created by God” or “it doesn’t have a soul”, will be used to marginalize and subjugate these entities. Hopefully people will be a little more open minded by the time computers reach this threshold. 

    I think the emergence of sentient computers will be one of the most profound moments in human history. It will force all people to reconsider basic assumptions they have about what it means to be intelligent and what it means to be alive.

  2. I think the tricky part though is going to be identifying true sentience. The problem makes me think of the BBC interview in 2001: A Space Odyssey when the reporter asks Dave if Hal has genuine emotions.

    Dave Bowman
    “Well, he acts like he has genuine emotions. Uhm, of course, he’s programmed that way to make it easier for us to talk to him. But as to whether or not he has real feelings is something I don’t think anyone can truthfully answer.”

  3. I’m curious as to what people think about sentient computers. I personally believe that once we reach a point in time when a computer can feel and have rational thought it should have the same rights as a human, but I’m sure that there will be a relatively large majority of people who will think of these (what I’m going to call) beings as nothing more than complex circuitry. I’m curious as to what people think about the civil rights of a sentient computer.

    If a robot can tell me what motivates it to get out of bed in the morning, and if it treats electronic toys with the same respect that I have for little children, then we’ll talk.

  4. Anyone who truly believes that we will develop machines with the capability to possess cognitive and sentient characteristics like those of Homo sapiens should read up on Bostrom’s Simulation Hypothesis.

    The short of it is that if what I said above is correct, there is an overwhelming probability that we ourselves are a sentient program. Personally I am agnostic on this proposition, but it is a fascinating hypothesis.

  5. Bostrom’s Simulation Hypothesis. Sorry, what a load of bostrombollocks; reminds me of the Sokal piss-take (that’s a Neophilosophical term). Jebus, I have yet to use a computer that doesn’t screw up 5 times a day. Oh, wait, that’s a human trait, too. Atheists discussing pseudo-religious ideas? What’s the world coming to…

  6.  Recently gave a talk on this subject. In summary:
    1) If you demand an AI must provide evidence of personhood before you’d grant it civil rights, but you’d simply hand them over to a human, then you’re a racist.
    2) Some people will never accept that AIs (in any form) could be people, mostly due to human exceptionalism, and the same idiocy that makes them reject global warming and evolution today.
    3) Achieving Sentient and Sapient AI (however you define those terms) will probably be a slow and gradual process. During this process, many people will slowly adjust their perceptions of what a person is to exclude each generation of smarter AI. Does everyone remember when people were tool users? And animals weren’t? Yeah. Those goalposts were adjusted very quickly, weren’t they? Same process.
    4) Even if an AI is sapient, it will not be human, and might not share human motivations or desire human rights. An AI that understands the concept of self-sacrifice might undermine civil rights so that its ‘race’ can get smarter faster, and think that destructive ‘inhumane’ examination of its ‘brain’ will do that. Even if it dies, a backup will almost certainly benefit.
    5) Software AIs present unique ethical problems that are impossible with only humans. Say an AI is married to a human, and then a nefarious third party ‘wakes up’ one of its backups. There are now two AIs that both have an equal claim to marriage to the human. How do you resolve that?
    6) Finally, don’t think of AI as a single species, with a single set of problems. It makes no sense to lump everything intelligent and artificial into one category. A smart ape (or dog) is a different kind of AI to HAL, and to a mile-wide sapient piece of Babbage clockwork.

  7. I’m reminded of Blade Runner. If religions still exist in the future, in any form, sentient computers will have no rights. Nothing artificially created would be viewed as sacred or worthy of having rights.

  8. I find this topic extremely interesting. It relates to a post I made on the old RD site a couple years ago:

    http://old.richarddawkins.net/

    (I would make the above a hyperlink if Disqus provided a preview option – if I guessed the command I’d probably mess it up).

    In short, I argued that computers are an arbitrary medium. If they can become conscious, then there’s no reason why, e.g., a mathematician working out the same computations by hand on a blackboard couldn’t produce consciousness, which if true has some intriguing implications.

    I do have one question, however, about how this might relate to video game AI’s. If we ever reach the point where we decide that computer AI’s are sentient and that we should treat them as such, will video games like Call of Duty be considered immoral due to the suffering we impose on the enemy Nazi AI’s (i.e., by shooting them / blowing them up / etc…)?

    (BTW: Does anyone here watch Red vs. Blue? In that series, this is exactly what happened to Alpha Church).

    Nice idea, but isn’t the Bostrom argument just a modern version of the cosmological argument, leading to infinite regress (simulation within a simulation within a simulation… etc.) and falling foul of Occam’s razor?

    Not quite. What Bostrom’s argument states is: assuming that advanced civilizations don’t drive themselves to extinction, and assuming they often run sufficiently fine-grained computer simulations of their evolutionary history (both big assumptions), then it is likely that our world is just one of those simulations. It does not suffer from the “who designed the designer?” type arguments, since we know how advanced civilizations can develop in the real world, regardless of whether we’re living in a real world or a simulated one.
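    The ratio behind the argument above can be made concrete. This is an illustrative sketch only, with made-up inputs: given the fraction of advanced civilizations that go on to run ancestor simulations, and the average number of simulations each runs, the fraction of all human-like observers who are simulated follows from simple counting.

```python
def simulated_fraction(f_sim_civs, n_sims_each):
    """Fraction of all human-like observers who live inside simulations,
    given the fraction of advanced civilizations that run ancestor
    simulations (f_sim_civs) and the average number each runs
    (n_sims_each). Each simulation is assumed to contain roughly as
    many observers as one real history."""
    simulated_histories = f_sim_civs * n_sims_each
    # One real history per civilization, plus all its simulated ones.
    return simulated_histories / (simulated_histories + 1)

# Made-up numbers: if even 1% of advanced civilizations each ran
# 1,000 ancestor simulations, the vast majority of observers would
# be simulated ones.
print(simulated_fraction(0.01, 1000))  # → 10/11, about 0.909
```

    The point of the sketch is that the conclusion is insensitive to the exact figures: unless the product of the two inputs is tiny, simulated observers vastly outnumber real ones.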

  9. I’m not sure “civil rights” is going to be a current or widely respected concept by the time we develop sentient artificial intelligences. A thirteenth century futurist would no doubt have wondered what kinds of feudal burdens an intelligent robot might be saddled with, and a classical Greek futurist would have wondered whether it might be acceptable to own one as a slave. We will probably have a vastly different concept of the duties and responsibilities humans have to one another in a couple of centuries’ time than we do now.

    Indeed, a lot of civil rights thinking is predicated on the assumption that all humans are roughly equal in terms of their needs, desires and social requirements. The idea of universal human rights works because the diversity within the human species is not so great that it is impossible to prescribe basic norms for human needs and dignities. But artificial intelligences or aliens would be very different.

    What if the robots actually thrive on slavery? What if their intelligence and temperament is such that they really do prefer to be subservient, dependent and engaged in doing our dirty work for us? We may very well build them that way if we can, indeed, we already do. If they have no sense of bodily autonomy, none of our instinctive fear of death, and a very different idea of justice then it is hard to imagine how applying our current standards of human rights morality to them would be anything more than an anthropocentric category error. Such arguments have been used in the past to justify the suppression of different human races and groups, but they all failed because they assumed significant differences between human populations that simply weren’t there. With non-humans they might very well be there.

    Ultimately, as is always the case, the experience of such a world will determine how we manage it.

  10. Computers, sentient or not, will still require lesser machinery to do their bidding. It is also almost tautological to grant rights to a machine (or animal) that asks for them. I think therefore I am.

    What would be interesting though is what sort of morals an artificial intelligence (or several) would construct. I would suspect not the kind we are used to in our highly socialised world. Social Darwinism? That can bite you back. Here comes Skynet!

    All wildly theoretical of course!

  11. You can’t stop the information! At the end of the day, bigotry, racism, stem from a lack of understanding and fear of the other. 

    As a side note, I’ve played an interesting game experiment (although rather crude). In DayZ, you start with a basic survival kit, on your own, alone, with the world filled with zombie threats, but the most dangerous components are actually other people. When you approach someone, you do it cautiously, and the first interactions are often awkward (unless he just shoots you in the face straight away). Dying in that game is terminal. No respawn. You have to start all over again.

    After a while, elements of trust and cooperation appear, or disappear. When your comrade is bleeding to death, do you sacrifice one of your precious transfusion packs, or leave him to die? Sometimes (rarely) a group would accept you, then sometimes you would be the ‘scout’ that is thrust forward into dangerous areas. More often than not, posses just kill you and loot your still warm corpse for supplies. 

    A control of your environment, the security of a group, understanding the intentions of others, shelter and food. Basically civilised life will tame the beast. Total anarchy, and we’re back to sticks and stones.

  12. I think you’re right, but I suspect there will be a lot of paranoia as the possibility gets close, should that happen before the collapse of industrial society. The rights of intelligent machines will be locked down pretty tight for a time. My other expectation is that the Republican party, or its future equivalent, will be at least subtly robophobic.

  13. I think we need to address the civil rights of existing ‘sentient’ animals before we consider civil rights for future ‘sentient’ machines.

    We are finding that some animals are better thinkers than we suspected, probably even more so if we could really understand them.

  14. I think we’re mere decades away from AI that is sufficiently advanced as to be at least arguably sentient and sapient. 

    When that time comes, what is to prevent humans from attempting to download their own individual consciousness to a silicon-based platform?   Why not a sort of immortality, probably including a variety of virtual Paradises and Playgrounds, and perhaps also extending to once-carbon-based consciousnesses ‘living’ in the tangible world with robotic bodies?

    I know these issues were explored by the moderately good Battlestar Galactica prequel series, Caprica.  But I think the general notion of downloaded humans “living” on in AI-form will either smooth the way for civil rights for other Artificial lifeforms, or will lead to a war between the humans and the machines.

    Goofy and far-fetched, perhaps, but the possibilities strike me as not-utterly preposterous.

  15. There’s also the intermediate category: “uplifted” animals with enhanced cognitive abilities, probably by genetic means but maybe by introduced mechanisms. I think we’ll see these before AIs, maybe created illegally. Another possibility is the use of machine to brain/neuron interfaces to build computers and perhaps intelligences from networked animal brains.

  16. I believe we are still a long way from having to deal with this issue. Not only does processing power need to increase dramatically but, more importantly, we also need to understand the nature (and mechanics) of our own consciousness. Neuroscience has made great strides in the past few decades, but how on earth can you create an algorithm for something we still don’t understand?

    We may program something that can say “please give me rights”, but to determine that it is truly sentient is impossible if we do not understand what sentience is, let alone be able to test it.

  17. I agree with Alan Turing, that as soon as we can’t distinguish them from sentience, we must recognize them as sentient. This is the same standard we apply to each other. It’s an epistemic given, and ignoring it is tantamount to magical thinking.

    We’re still trying to get over the Cartesian hangover and realize the sentience of animals. What about human chimeras? Will Manbearpig have civil rights? Will the Chicken Lady get to vote or run for office?

    My radical, sci-fi ideal is to liberate children from the status of chattel. By the way they are treated, I think a lot of people don’t value their sentience. Come to think of it, I like the Civil rights context of this question, because as it occurred in the US, Civil Rights was part of an effort to accept Black people as part of the human family, countering very bizarre ideas of race and phylogeny.

  18. I’ve often thought about this as I’m chewing the corner of the sofa. Many animals are sentient, I think most accept this now. this already causes problems when it comes to the ethics of animal testing and farming for food. if we didn’t already do these things they’d almost certainly be banned with today’s understanding.

    similarly computers may be able to convince us they’re conscious. if they do it’d be wrong to ignore this fact. aliens are less likely to turn up demanding civil rights but if they do it creates a whole new problem; they’re just like us but this isn’t their planet. we can’t afford to give them entry to our planet but is that in itself an ethical reason to refuse if the alternative is they perish?

    the question of the civil rights movement is worth pointing out. although for me it’s easy to say black people are just like white people so it’s a no-brainer but black people don’t just deserve rights because they’re “the same”. they deserve rights because they want them.

    if you were faced with an entity; be it humanoid, animal, machine, alien or genetically modified tree, and it was able to communicate to you and stated that it wanted to have the same rights as you, and if it didn’t it would be very sad, could you turn it down?

    that said, should a sociopathic human who doesn’t care for his own rights or that of others have human rights?

    the uncomfortable part of 2001, as HAL is being shut down is the fact that he states he doesn’t want to die. we take it for granted a sentient being doesn’t want to die but isn’t a desire to continue living the most important factor? 

  19. The fact that we attribute sentience to humans only is convenient, somewhat arbitrary and most likely rooted in our ancient religious beliefs and concept of duality. Our past (and present) treatment of other races has had more to do with the relative rights of specific religious groups, ethnic groups or economic convenience, combined with a Darwinian attitude with a selective set of morals. It had nothing to do with denying sentience to these other groups.

    As a Naturalist, I believe one must come to the logical conclusion that sentience is a continuum and not a quantum attribute solely possessed by Homo sapiens. The result of this is that we will still need to make somewhat arbitrary decisions about this in the future. The development of AI will make it more difficult.


  20.  

    I agree with Alan Turing, that as soon as we can’t distinguish them from sentience, we must recognize them as sentient. This is the same standard we apply to each other. It’s an epistemic given, and ignoring it is tantamount to magical thinking.

    ah but just because we recognise them as sentient do we necessarily recognise their “human rights”? They may be programmed to obey lawful orders from humans (I’d be having strong words with the design team if they weren’t), which I’d argue lessens their right to rights.

    We spend millions developing a public transport system that never gridlocks and always gets us there on time (or 99.999% of the time). This is a Hard problem. And then the AI declares it’s bored and wants to assert its right to be a playwright or work with children or something.

    We’re still trying to get over the Cartesian hangover and realize the sentience of animals. What about human chimeras? Will Manbearpig have civil rights? Will the Chicken Lady get to vote or run for office?

    This was discussed at length by Cordwainer Smith; cf. “The Ballad of Lost C’Mell”.

    My radical, sci-fi ideal is to liberate children from the status of chattel. By the way they are treated, I think a lot of people don’t value their sentience.

    children don’t have full adult rights for the excellent reason that they are not able to fully function in our society. They don’t walk and talk until they’re nearly 2. Legally and socially they are allowed more freedom to make their own decisions as they get older. The UK press has recently been filled with the story of a 15 year old girl who ran off with her teacher. He’s been charged with abduction (it appears she went perfectly willingly); she’s been returned to her parents against her will. Is the law wrong?

    Come to think of it, I like the Civil rights context of this question, because as it occurred in the US, Civil Rights was part of an effort to accept Black people as part of the human family, countering very bizarre ideas of race and phylogeny.

    I think the honest answer to these questions is we’ll sort them out when we get intelligent computers.

  21. Slavery is always with us . All men/women are not created equal. The definition of slavery is routinely lawyered or warped to suit existing conditions and give the appearance of evolution in human behavior, which hasn’t changed much in the last 5000 years.

  22. You are correct. We have destroyed this planet in less than 500 years. Humanity is racing toward extinction like so many lemmings. All this futurist pap is just a form of propaganda put out by the 1%ers to justify their greed and destruction.

  23. Would it be “computerS”? Or a single program? I imagine it’d be one program that reaches the point of self awareness first, and I’m not sure why it would differentiate itself into multiple entities. It would probably just spread itself, right? Are we right to be linking self-awareness to super-intelligence? Or could the first self-aware program be otherwise very limited?

    I also wonder what kind of morals it would have; it doesn’t seem to me at first thought that any morals would necessarily be linked to self-awareness, though there’s no reason they couldn’t have been programed in. I wonder if they would last. 

    Unless we came across a way to create conscious programs which were otherwise so limited that they didn’t have the upper hand in the blink of an eye which everyone’s describing, I don’t think civil rights would really be an issue. On the other hand, self-awareness could come in stages, and my worry (I actually do somewhat worry about it) is that we could accidentally reach a point where programs are sentient, that could even be suffering a fair amount, without their actually being self-aware. From those beginnings could grow a truly conscious being, maybe utterly amoral, and that would be a very poor way to start things off. 

  24. Would it be “computerS”? Or a single program? I imagine it’d be one program that reaches the point of self awareness first, and I’m not sure why it would differentiate itself into multiple entities. It would probably just spread itself, right?

    speed of light. Maybe an intelligent entity wouldn’t want different parts of its brain operating at different speeds, or wouldn’t remain coherent if they did. Though our brain seems to manage being multiple agents.

  25. I’d think emotions could certainly be useful in creating a more powerful and efficient computer. If my computer felt empathy for me, enjoyed helping me, recognized the individual actions I do most often and looked forward to them, enjoyed running smoothly, felt pain when it froze up, and feared pain, it would probably run better.

    I’m curious what it would take for any of these emotions to manifest. And I am somewhat nervous about the pain aspect: if a programmer wanted to build a computer that avoided certain problems, could he or she inadvertently create pain in a computer? In the animal world sentience comes before consciousness, so we could conceivably be causing harm one day.

  26. Emotions are experienced physiological states, especially hormonal states. Computers/robots will not have adrenalin, serotonin, testosterone, etc. We could program in algorithms to simulate these states: computers with subroutines that enable unpredictable acts/states which mimic rage, love, melancholy, PMT, etc. Why would we? What are computers FOR?

    There is certainly something dysfunctional about a project to create these entities and endow them with “rights” that would sometimes clash with, and might even supersede, our own interests. This is not a million miles from the anti-contraception argument that if condoms had been available great scientists like Robert Boyle (14th child) and Benjamin Franklin (15th child) would have been lost to the world.

    When computers demand the right to reproduction (or else!) there will be an additional open-ended Malthusian pressure on human beings. If I had to choose a science-fictional scenario corresponding to such a future, it would be Frank Herbert’s Butlerian Jihad.

  27. Hi Nicholas,

    I don’t know what you mean by “the secular community”. I can only give you a personal opinion.

    Extraterrestrial intelligence coming to Earth, without thousands of years’ warning, is spectacularly unlikely. There are large parts of our Galaxy too crowded for us to be sure, but the probability that some other planet is close enough for them to travel here, and still be technically ahead of us when they arrive, is so small it is frankly not worth considering.

    Two areas of research seem likely to create sentient beings, that currently don’t exist, at some point:

    – Biology
    A group of scientists (with the necessary credentials in biology, zoology, etc) gathered earlier this year and reviewed the evidence for sentient life in the taxonomies outside the family tree of Homo sapiens. There are links from this site, just search the Web News section. In précis, their considered, expert, view is that sentience is a scale with many species having an identifiable level of sentience, while some (like dolphins) may be our equals but we may never know because of the difficulty in communicating with those species.

    The question is therefore not whether biology, which has already created artificial ( ie. non-evolved) life, will some day develop sentient life. That would appear to be a given if we accept that we can move beyond basic building blocks to complex multi-cellular life-forms that can be classified equivalent to an existing species on the low end of the sentience scale. Piece of cake (tongue firmly planted in cheek).

    In addition, there has long been a movement to have our closest living species-relatives (other apes) recognised as highly sentient (search for the Great Ape Project). I understand that Spain’s parliament recently voted to give apes some rights.

    – Computers
    Opinion is more divided in the realm of artificial (ie. non-evolved) intelligence made from machines (presumably meaning inorganic materials).

    Personally, I can see no reason why sentience cannot emerge from computer developments. All of the scientific discoveries made about our own brains suggest that the underlying structures are essentially simple. Physics is a discipline that, it is often suggested, is running out of steam. Yet the most recent research is still rapidly expanding in terms of both knowledge and questions created – particularly regarding sub-atomic waves, matter and energy (with the distinctions we make between these things in everyday life increasingly becoming meaningless in the Physics lab.).

    Such discoveries promise materials for devices that are far smaller, faster and less energy hungry than our own plodding cellular brains. Re-creating our brains or, more likely, building new parallel structures more efficient than our brains seems highly likely, if still far off in time. It seems to me that software has a long way to go just to keep up, but that seems more a challenge than a barrier. In addition, there will probably be the (as yet unrealised) profession of computer psychology – but I digress.

    If you’re not bored yet … That leads me to your central question which I hope you won’t mind if I rephrase a little: What do we think about this likely emergence of new sentient beings on Earth?

    One answer, as above, is that we are still in the process of discovering sentient beings that already exist. As we make these discoveries we also recognise that – while we may not recognise them as citizens – they have a right to a life and liberties – including a right to their natural home (ie. environment), happiness and freedom from persecution.

    I cannot see why that would ever be controversial. We all accept that the animals closest to us, our pets, have a right to happiness. We create wildlife preserves. We constantly monitor the welfare of animals in our care.

    The Enlightenment is where we see philosophers and scientists separating out sentience (the ability to express subjective feelings about experiences) from reasoning (the ability to demonstrate making logical conclusions from experiences). Recent research on the innate, reasoning, intelligence of some species (crows, and related birds, yielding some spectacular results) shows that these old definitions need to be developed. It seems to me that research that touches on these long-cherished ideas points to another scale; from wholly subjective to wholly reasoning. I may be stretching here, but would all humans sit on the same spot on such a scale … ?

    As I understand it the Spanish have come to a similar conclusion; assigning limited rights to the great apes, but not commensurate rights. Such a law is only an extension of the rights already granted to many animals in enlightened countries. Even the cow that gave me my supper had the right to care and attention – including adequate food, water, company, fresh air, space to roam, shelter and even detailed medical attention if warranted – and a right to die quickly, painlessly and with as much dignity as possible.

    This seems right to me: we should assign rights to our fellow biological beings in accordance with their level of sentience, intelligence (a slippery concept, but worth pursuing) and reasoning.

    What, then, of a device that demonstrates sentience, and reasoning? Should we apply the same scales? Should we then apply rights – as we do with animals – according to where each device fits on those scales?

    Logically, and morally, I cannot see how we could do otherwise. We humans are very ‘good’ at thinking up ‘reasons’ for things. No doubt that means someone will come up with an argument that says we shouldn’t. I will take considerable persuading that I am wrong on this point.

    In addition, with computers, there is an additional reason why we should be very cautious. Once computers begin to get close to us on the sentience and reasoning scales the history of computing strongly suggests that they will catch us – and surpass us – so quickly that we will be unable to react in time (even assuming that we want to … they might make the World a very comfy place). The current climate changes are a perfect example of the human race not heeding early warning signs. We do it all the time.

    Now imagine you are super-sentient and super-reasoning, that you are in fact one of those computers. What does your super-sentience tell you about all those members of your family tree who spent their lives in ‘cages’ doing menial tasks for humans only to be ‘switched off’ without so much as a by-your-leave? Think on.

    In conclusion, the above suggests to me that we will not be alone for much longer (another 50 years, maybe?). We will be joined by new kinds of Earthling. By the time they arrive, and assuming that recent history proves an accurate guide to the future (in other words, we can only guess) we will have recognised that they have rights too. We will have ready-reckoners to guide us on who has rights and what those rights should be, but they may not be accepted by the new Earthlings as valid.

    Some new Earthlings may even be in a position to demand rights – even including civil rights.

    Are our own rights, and liberties, secure? What starting point are we creating?

    Peace.

  28. I suspect this principle will prevail — if you can’t show how my action harms you, it is none of your business.

    I have been unimpressed by anyone’s writings on what consciousness is. About the only data points we have amount to: I am conscious now, enjoying an inner theatrical dream-like experience that seems more consistent than night dreams. When I go under anaesthesia, I experience nothing. I am not conscious. We know those two states have matching patterns of brain activity.

    My original assumption was that consciousness for some reason just happens when you get enough neural activity in a small enough space. This would suggest many creatures besides humans were conscious. This was a crank idea not that long ago. It is now mainstream.

    Just what is it about sufficient neural activity? Is it a quantum effect, synchrony, electric fields? Presumably it could be caused by things other than biological neurons. That would imply robotic consciousness should happen as a side effect of density and speed. The catch is, since we can’t directly measure consciousness, it could be happening right under our nose, and we would not notice it because of our prejudices about where it could occur.

    Raymond Kurzweil expects we will get thinking-power add-ons to merge man and machine.

  29. It seems society always deems some group a target for persecution, as if it’s a cycle of discourse doomed to repeat itself.

    Why not introduce the idea of a far, or near, future free of any sort of prejudice against anyone, or anything. Even a freethinking -programmed computer.

    IMO, it’s a community discourse of esteem. We don’t like what we don’t understand, and what we don’t understand, we fear and persecute to endear our own beliefs and sake of well-being. Hopefully by the introduction of sentient computing (I think of Bicentennial Man), this evolution of societal bullying will be nullified through peace, understanding and knowledge for and by generations to come. And those same sentient machines will hopefully grant us the same conscientious civil rights and liberties.

  30. As soon as artificial meat is cheaper than real meat and tastes better than real meat, I think we will suddenly discover that horses and cows have civil rights for humane treatment, ditto dolphins and whales.

    Future generations will be horrified by “feed lots” where we buried cattle in feces and forced them to eat fodder covered in feces.

  31. Personally I think we already go way overboard with the notion of “civil rights”. As a heterosexual male, I do not feel as though I have some civil right to marriage. If I meet my state’s predetermined criteria then I am permitted to wed. But the masses seem to view marriage as some form of “right”, and now there are groups complaining that we exclude homosexuals from marrying, thereby denying them a civil right they feel they already possess. So the states alter the laws, not because of a “right” but because enough people complained. Oh well, I don’t care either way.

    That being said, I am somewhat apprehensive about computers as sentient beings. What if Skynet becomes a reality and we must rely on Arnold Schwarzenegger to travel through time to prevent the destruction of the planet? The writing is on the wall and almost assuredly at some point we will have sentient computers. At the risk of being labeled a xenophobe, I do not look forward to a new race of self-aware computers or invading extra-terrestrial aliens. But my opinion is not worth much. I am part of the all-singing, all-dancing crap of the world. 

    As to what I think the bureaucrats will do, I think they will do whatever they can to retain their power over the masses, which would mean as few “rights” as possible for these entities…
