The New Drone Order is Only Beginning: Intro- All is Buzzing on the Geopolitical Front

Drone technology is moving forward, whether we like it or not. MQ-9 Reapers manufactured by General Atomics are sold to the U.S. Air Force, fitted with Hellfire missiles provided by Lockheed Martin. The military industrial complex is ticking, unmanned aerial vehicles are soaring, and all is not quiet on the Western front. Few places are quiet on the Eastern hilt of the world. Drone strikes pepper Pakistan, Yemen, Somalia, Libya, and Afghanistan, as the world has become an all-access battlefield where remote-controlled homicide can be carried out with minimal effort, for the first time in human history.

Things are changing. Warfare has been altered forever. Machines are learning...how to learn. Humans are doing less of the hunting and killing and delegating these duties to tougher, colder customers. The purpose of this series is to examine the players, characters and ideologies that are deeply influencing the way our future is shaping up, in both negative and positive ways. When one drone strike kills an innocent child in a foreign village, another drone is used for ocean exploration and hurricane detection. We will enter the eye of the storm of controversial issues and attempt to chart territory that pits the right to due process against the rich vein of untapped A.I. (artificial intelligence) technology, kicking up dirt on the greedy politicians, lobbyists and arms dealers who would rather push a button than fight a war themselves. If you think the United States is winning... I'll only tell you this once. The new drone order is only just beginning, and all is buzzing on the geopolitical front.

Editor’s Note- BFP welcomes Erik Moshe to its team. Future articles in Erik’s new series will be available only to BFP activist members.

The New Drone Order: Part I- A.I. Entities, Our Future Friends or Enemies?

Steve is a scientist, entrepreneur, and a jack of many trades. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He can be seen online contributing to a wide variety of podcasts, discussions, conferences and foundations. One of his goals is to ensure the smooth transition of autonomous robots into our lives without mucking up our own livelihoods in the process. His company Self-Aware Systems started out helping search engines search better, but gradually he and his team built a system for reading lips and a system for controlling robots. If he ever owns a cyborg and is able to program it himself, it will not be cold-hearted. I'm confident it would be a warm, hospitable homemaker with culinary and family-therapy skills to boot.

"The particular angle that I started working on is systems that write their own programs. Human programmers today are a disaster. Microsoft Windows for instance crashes all the time. Computers should be much better at that task, and so we develop systems that I call self-improving artificial intelligence, so that's AI systems that understand their own operation, watch themselves work, and envision what changes to themselves might be improvements and then change themselves," Steve says.

In addition to his scientific work, Steve is passionate about human growth and transformation. He holds the vision that new technologies can help humanity create a more compassionate, peaceful, and life-serving world. He is one of the men and women behind the scenes doing their very best to ensure that killer robots never reach an operable level - either permanently, or at least not before we're ready to handle them as a species. His "Safe-AI Scaffolding Strategy" is his main proposed solution, and a positive way forward.

You can call him an expert in the field of FAI, or friendly artificial intelligence, which is “a hypothetical artificial general intelligence that would have a positive rather than negative effect on humanity.” The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values.

Getting an entity with artificial intelligence to do what you want is a task that researchers at the Machine Intelligence Research Institute (MIRI) in Berkeley, California, are taking on. The program’s aim is to make advanced intelligent machines behave as humans intend even in the absence of immediate supervision. In other words, “take initiative, but be like us.”

Yudkowsky realized that the more important challenge was figuring out how to do that safely - getting AIs to incorporate our values into their decision making. "It caused me to realize, with some dismay, that it was actually going to be technically very hard," Yudkowsky says. “Even if an AI tries to exterminate humanity,” it is “outright silly” to believe that it will “make self-justifying speeches about how humans had their time, but now, like the dinosaur, have become obsolete. Only evil Hollywood AIs do that.”

Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs, because problems of ethical complexity will not yield to software advances or increases in computing power.

Steve differs in that he is wholly optimistic about the subject. He thinks that intelligent robotics will eliminate much human drudgery and dramatically improve manufacturing and wealth creation. Intelligent biological and medical systems will improve human health and longevity, and educational systems will enhance our ability to learn and think (pop quizzes won’t stand a chance). Intelligent financial models will improve financial stability, and legal models will improve the design and enforcement of laws for the greater good. He feels that it's a great time to be alive and involved with technology. With the safety measures he has developed, Steve hopes to merge machine intelligence with positive psychology - a field that's only a few decades old but has already given us many insights into human happiness.

Cautious attitudes in an evolving drone age

In an article on Vice’s Motherboard entitled "This Drone Has Artificial Intelligence Modelled on Honey Bee Brains", we can see firsthand how bizarre science can get, and how fast we are progressing with machine intelligence.

Launched in 2012, the Green Brain Project aims to create the first accurate computer model of a honey bee brain, and transplant that onto a UAV.

Researchers from the Green Brain Project—which recalls IBM’s Blue Brain Project to build a virtual human brain—hope that a UAV equipped with elements of a honey bee’s super-sight and smell will have applications in everything from disaster zone search and rescue missions to agriculture.

Experts, from physicist Stephen Hawking to software architect Bill Joy, warn that if artificial intelligence technology continues to be developed, it may spiral out of human control. Tesla founder Elon Musk calls artificial-intelligence development simply “summoning the demon.”

British inventor Clive Sinclair told the BBC: "Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability."

"I am in the camp that is concerned about super intelligence," Bill Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Are we jumping the gun with all of this talk of sentient robots triggering an apocalypse? Rodney Brooks, an Australian roboticist and co-founder of iRobot, thinks so. He views artificial intelligence as a tool, not a threat. In a blog post, he said:

Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

In an interview with The Futurist, Steve talked about the best- and worst-case scenarios for a fully powerful AI. He said:

I think the worst case would be an AI that takes off on its own, its own momentum, on some very narrow task and works to basically convert the world economy and whatever matter it controls to focus on that very narrow task, and that, in the process, it squeezes out much of what we care most about as humans. Love, compassion, art, peace, the grand visions of humanity could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, diseases, the fact that people have to work at jobs that aren't necessarily fulfilling, all of those could be taken care of by machine, ushering in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.

Autonomous technology for the greater human good

Steve’s primary concern has been to incorporate human values into new technologies to ensure that they have a beneficial effect. In his paper, “Autonomous Technology and the Greater Human Good”, the most downloaded academic article ever in the Journal of Experimental and Theoretical Artificial Intelligence, Steve summarized the possible consequences of a drone culture that’s moving too swiftly for its own good:

Military and economic pressures for rapid decision-making are driving the development of a wide variety of autonomous systems. The military wants systems which are more powerful than an adversary's and wants to deploy them before the adversary does. This can lead to ‘arms races’ in which systems are developed on a more rapid time schedule than might otherwise be desired.

A 2011 US Defense Department report with a roadmap for unmanned ground systems states that ‘There is an ongoing push to increase unmanned ground vehicle autonomy, with a current goal of supervised autonomy, but with an ultimate goal of full autonomy’.

Military drones have grown dramatically in importance over the past few years both for surveillance and offensive attacks. From 2004 to 2012, US drone strikes in Pakistan may have caused 3,176 deaths. US law currently requires that a human be in the decision loop when a drone fires on a person, but the laws of other countries do not. There is a growing realization that drone technology is inexpensive and widely available, so we should expect escalating arms races of offensive and defensive drones. This will put pressure on designers to make the drones more autonomous so they can make decisions more rapidly.

Thoughts on Transhumanism

In an interview featured on Bullet Proof Exec, Steve briefly expressed his views on transhumanism, which is a cultural and intellectual movement that believes we can, and should, improve the human condition through the use of advanced technologies:

My worry is that we change too rapidly. I guess the question is, how do we determine what changes are like, “Yeah, this is a great improvement that’s making us better.” What are changes like, let’s say, you have the capacity or the ability to turn off conscience and to be a good CEO, well, you turn off your conscience so you could make those hard decisions. That could send humanity down into a terrible direction. How do we make those choices?

Interview with Dr. Steve Omohundro

I had the privilege of speaking with Steve, and here's what he had to say.

BFP: Thanks for taking the time to speak with us today. You have an interesting last name. If I may ask, where does it come from?

Steve: We don't know! My great grandfather wrote a huge genealogy in which he traced the name back to 1670 in Westmoreland County, Virginia. The first Omohundro came over on a ship and had dealings with Englishmen but we don't know where he came from or the origins of the name.

BFP: How have drones changed our world?

Steve: I think it's still very early days. The military uses of drones, both for surveillance and for attack, have already had a big effect. Here's an article stating that 23 countries have developed or are developing armed drones and that within 10 years they will be available to every country.

On the civilian side, agricultural applications like inspecting crops have the greatest economic value currently. They are also being used for innovative shots in movies and commercials and for surveillance. They can deliver life-saving medicine more rapidly than an ambulance can. They can rapidly bring a life preserver to a drowning ocean swimmer. They are being used to monitor endangered species and to watch out for forest fires. I'm skeptical that they will be economical to use for delivery in situations which aren't time-critical, however.

BFP: Do you think artificial intelligence is possible in our lifetime?

Steve: I define an "artificial intelligence" as a computer program that can take actions to achieve desired goals. By that definition, lots of artificially intelligent systems already exist and are rapidly becoming integrated into society. Siri's speech recognition, self-driving cars, and high-frequency trading all have a level of intelligence that existed only in research systems a decade ago. These systems still don't have a human-level general understanding of the world, however. Researchers differ on when that might occur. A few believe it will be impossible but most predict it will happen sometime in the next 5 to 100 years. Beyond the ability to solve problems are human characteristics like consciousness, qualia, creativity, aesthetic sense, etc. We don't yet know exactly what these are and some people believe they cannot be automated. I think we will learn a lot about these qualities and about ourselves as we begin to interact with more intelligent computer systems.
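Under that definition, even a few lines of code can qualify at the lowest level. Here is a purely didactic sketch of my own (not anything from Steve's systems) of "a computer program that can take actions to achieve desired goals":

    # A minimal goal-directed agent: it has a desired goal state and, at each
    # step, takes whichever available action leaves it closest to that goal.
    GOAL = 7
    ACTIONS = {"left": -1, "right": +1, "stay": 0}

    def choose_action(state):
        # Evaluate each action by how near its outcome lies to the goal.
        return min(ACTIONS, key=lambda a: abs(state + ACTIONS[a] - GOAL))

    state = 0
    while state != GOAL:
        action = choose_action(state)
        state += ACTIONS[action]
        print(action, "->", state)  # seven "right" steps, then the loop ends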

BFP: According to a report published in March by the Association for Unmanned Vehicle Systems International, drones could create as many as 70,000 jobs and have an overall economic impact of more than $13.6 billion in three years. That means, the report says, that each day U.S. commercial drones are grounded is a $28-million lost opportunity. If these economic projections prove to be accurate, do you see a prosperous industry on the horizon for them as well?

Steve: I believe they could have that impact but $13.6 billion is a small percentage of the GDP. The societal issues they bring up around surveillance, accidents, terrorism, etc. are much larger than that, though. For there to be a prosperous industry, the social issues need to be carefully thought through and solved.

BFP: Do you think that autonomous robot usage will spin out of control without implementation of the Safe-AI Scaffolding Strategy that you and your colleagues formulated?

Steve: Autonomous robots have the potential to be very powerful. They may be used for many beneficial applications but also could create great harm. I'm glad that many people are beginning to think carefully about their impact. I believe we should create engineering guidelines to ensure that they are safe and have a positive impact. The "Safe-AI Scaffolding Strategy" is an approach we have put forth for this but other groups have proposed alternative approaches as well. I'm hopeful that we will develop a clear understanding of how to develop these systems safely by the time that we need it.

BFP: Drones have landed on the White House lawn and in front of Angela Merkel. Where they might land next is unpredictable, but this uncertainty is a reminder that governments around the world are still trying to find their balance with an emerging technology of this scale and breadth of application. In what positive ways do you posit that drones can affect the world, or affect the work that you are involved in?

Steve: Flying drones are just one of many new technologies that have both positive and harmful uses. Others include drone boats, self-driving vehicles, underwater drones, 3-D printing, biotechnology, nanotechnology, etc. Human society needs to develop a framework for managing these powerful technologies safely. Nuclear technology is also dual-use: it has been used both to provide power and to create weapons. Fortunately, so far there hasn't been an unintended detonation of a nuclear bomb. But the recent book "Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety" tells a terrifying cautionary tale. Among many other accidents, in 1961 the U.S. Air Force inadvertently dropped two hydrogen bombs on North Carolina and 3 out of 4 of the safety switches failed.

If we can develop design-rules that ensure safety, drones and other autonomous technologies have a huge potential to improve human lives. Drones could provide a rapid response to disaster situations. Self-driving vehicles could eliminate human drudgery and prevent many accidents. Construction robots could increase productivity and dramatically lower the cost of homes and manufactured goods.

BFP: Have you read any science fiction books that expanded your perspective on A.I.? In general, what would you say got you into it?

Steve: I haven't read a lot of science fiction. Marshall Brain's "Manna: Two Views of Humanity's Future" is an insightful exploration of some of the possible impacts of these technologies. I got interested in robots as a child because my Mom thought it would be great to have a robot to do the dishes for her, and I thought that might be something I could build! I got interested in AI as a part of investigating general questions about the nature of thought and intelligence.

BFP: You recently showed me a video of a supercharged drone with advanced piloting tech that can reach speeds of 90 miles per hour and costs about $600. Could you see yourself going out and buying a quadcopter like that, maybe having a swarm of drones spell out "Drones for the Greater Good" in the sky? Or would you rather keep your distance from the "Tasmanian devil" drone?

Steve: I haven't been drawn to experimenting with drones myself, but I have friends who have been using them to create aerial light shows and other artistic displays. The supercharged 90 mph drone is both fascinating and terrifying. Watching the video, you clearly get the sense that controlling the use of those will be a lot more challenging than many people currently realize.

BFP: I've also seen a quadrotor with a machine gun.

Steve: Wow, that one is also quite scary. What's especially disturbing is that it doesn't appear to require huge amounts of engineering expertise to build this kind of system and yet it could obviously cause great harm. These kinds of systems will likely pose a challenge to our current social mechanisms for regulating technology.

# # # #

*Watch Steve's TEDx video from May 2012: Smart Technology for the Greater Good

Erik Moshe is a BFP investigative journalist and analyst. He is an independent writer from Hollywood, Florida, and has worked as an editor of the alternative news blog Media Monarchy and as an intern journalist with the History News Network. He served in the U.S. Air Force from 2009 to 2013. You can visit his site here.

 



Comments

  1. “The military industrial complex is ticking, unmanned aerial vehicles are soaring, and all is not quiet on the Western front. Few places are quiet on the Eastern hilt of the world. Drone strikes pepper Pakistan, Yemen, Somalia, Libya, and Afghanistan, as the world has become an all-access battlefield where remote-controlled homicide can be carried out with minimal effort, for the first time in human history.
    Things are changing. Warfare has been altered forever. Machines are learning…how to learn. Humans are doing less of the hunting and killing and delegating these duties to tougher, colder customers.”

    Warfare has been changing since day one – evolving in the same way since the first human walked softly and carried a big stick, while his combatant neighbor walked big and bad and carried a bigger stick. It started as individual combat, then transformed into group combat. Armored Knights on armored horses challenged by thousands of archers firing clouds of armor piercing arrows. Still, in those days the casualties were mostly the combatants (and of course their unfortunate animals). Seeking more efficient ways to kill the other guy, while not getting killed, incorporated growing technologies and machines which evolved war into more and more killing of non-combatants. WWII changed that in big ways – so that more civilians (and conscripts) were killed than professional soldiers. The evolution has been more and more in that direction. Killing others without being personally involved. Turning war into a computer game? How nice that there are so many practicing at home on their own computers – no need to train them up. Saves money for the military at the same time creating more need for technology.

    How can humans create an Artificial Intelligence when their own intelligence is so lacking, and so psychopathic – and expect AI to be better and more humane? Perhaps they will follow the Monsanto role model in creating artificially modified food crops which fight off elements trying to harm them, while providing greater abundance of healthy food to humans. Look how well that has worked. It has created a lot of income for the food-medical-pharmaceutical industry complex.

    I’ve been involved in technology for over 60 years, especially IT, so I respect it. But autonomous AIs - to me that’s a road to disaster.

    Thanks for your thought-provoking reporting, Erik! Good to have you with us…

    • Dennis,

      “How can humans create an Artificial Intelligence when their own intelligence is so lacking, and so psychopathic -…”- You read my mind; my thoughts:-))) Well-said- perfectly-said.

      • 344thBrother says:

        “Getting an entity with artificial intelligence to do what you want is a task that researchers at the Machine Intelligence Research Institute (MIRI) in Berkeley, California, are taking on. The program’s aim is to make advanced intelligent machines behave as humans intend even in the absence of immediate supervision. In other words, ‘take initiative, but be like us.’”

        GREAT! An experimental entity that’s infinitely smarter than we are that takes initiative… for what exactly? And is “like us”. Psycho in, psycho out?

        peace
        d

  2. candideschmyles says:

    Wherever this leads us, there is no putting the genie back in the bottle. The lethal machine that can rewrite its own command software should be outlawed immediately in every international convention. However, a drone horde under the direction of an individual is by no means any less dangerous. With limited AI, conventional drones and their nano-scale brothers make the mother of all genocides seem near inevitable, simply due to their affordability. With the bad there is the inescapable fact that the use of AI-enabled machines also has the potential to revolutionise most of what humans have to do in the workplace. Yet this would demand a total rethink of current business models. Indeed it would be the death of business and of wealth as we know it. Obviously that is going to meet a lot of resistance from the powers that be. So the reality is that the AI machines we would welcome are going to be limited, and the ones that kill us will prosper.

  3. This piece manages to be horrifying and fun at the same time — thanks and welcome to the team, Erik. BFP is really cooking lately — such a great place to be!

  4. Ronald Orovitz says:

    “The particular angle that I started working on is systems that write their own programs. Human programmers today are a disaster. Microsoft Windows for instance crashes all the time. Computers should be much better at that task, and so we develop systems that I call self-improving artificial intelligence, so that’s AI systems that understand their own operation, watch themselves work, and envision what changes to themselves might be improvements and then change themselves” …

    I’m italicizing everything in that sentence that strikes me as problematic. How does an algorithm acquire a notion of self? It has inputs, it gives outputs, it gets inputs back, it relates those with the previous inputs, it alters the next iteration of outputs accordingly, then comes the next iteration of inputs… and so on…. I/O, I/O, I/O ad infinitum… But nowhere does an internal “self” come up. All of its “observations” are of external inputs… I’m afraid then that the notions that the name “Self-Aware Systems” suggests – of warm-hearted, therapeutic cyborgs – are completely nonsensical….

    So long, that is, as we are dealing with discrete, binary computation. I suspect, however, that when quantum computing comes to the fore, perhaps then we will be confronted with what we can call a moral agent, and be obliged to treat it as such. Until then, I will have no qualms whatsoever about hurling every sort of insult I can conjure at Siri.

    • Ronald Orovitz says:

      I neglected to italicize “self-improving”…

      And just to add: we should feel very sorry for anyone who would cuddle with their cyborg.

      • candideschmyles says:

        That kind of cyborgist prejudice is morally outrageous!

        • Ronald Orovitz says:

          I admit I am an analog chauvinist… Our civilization, having peaked in the 1970s (the golden age of analog), started going down the tubes with the digitization of everything. The 1980s were a transition period, when perhaps there could have been a turning back… For instance, the consumer market could have rejected CDs and stuck to vinyl, but no… By the time of Windows 95 it was all over… Bill Gates had ruined everything.

          Back in the ’70s there was this silly fad about “pet rocks”… But I don’t think it was any more silly than the notion of developing warm-hearted cyborgs. It was much cheaper, though – so it was less silly.

    • I don’t see quantum computing necessarily embodying “smarter” algorithms. You could possibly create a “brute force” algorithm which might do something amazing, which algorithm would be too slow on silicon, but might be practical with quantum computing speed and power. It’s the algorithms which embody some measure of intelligence or autonomy, rather than the hardware platforms per se.

      Software has been coding itself for a long time, but only as directed. When you write computer programs in a “high-level” language, compiler software translates your instructions into machine-level code which executes in the processor(s). Over time, high-level languages have become more human-friendly and even graphical; one for engineers, for example, creates machine code from diagrams with interconnected blocks representing hardware functionality. That language we have now.

      The continued development of ever “higher-level” and intuitive programming languages will leverage computing power more easily. Ultimately there’ll be a programming language interacting with the programmer, perhaps through an avatar, which might simply ask, “What would you like to do?” The programmer describes what is needed, the avatar occasionally asks for more detail or clarification, and the end result of this conversation (or series of conversations) is an executable program. Is this classic AI? Not really, because the algorithm is not volitional. The avatar is merely a servant to the human programmer. The human has to IMAGINE a computing solution to a real world problem, and take action to make it happen, even if that action is nothing more than a conversation with an avatar interface.
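      That translation step can be seen in miniature today with Python's standard dis module, which prints the lower-level bytecode the interpreter actually executes for a one-line function (a small didactic sketch; the exact opcodes printed vary by Python version):

          # High-level source goes in; lower-level instructions come out.
          # The translator works only as directed by the programmer.
          import dis

          def add(a, b):
              return a + b

          dis.dis(add)  # e.g. LOAD_FAST a, LOAD_FAST b, an add opcode, RETURN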

      For a very long time the closest we’ll get to AI is something like the computer which was imagined on the starship Enterprise (second generation). It could answer any question for which it had data, perform analysis and postulate when so directed, and operate the complex and interconnected systems of the starship with near perfect competence. It had no initiating role in command meetings or command decisions, however, and was certainly no threat to stage a mutiny unless the script writers arranged for it to be tampered with in some way.

      In Star Trek it was more dramatic to have people on the Enterprise bridge carrying out commands and controlling the ship, but in fact the imagined computer of the Enterprise should have obviated the jobs of the navigator and weapons officer and most of the engineering crew. But not the captain or first officer! I suspect it will be so even when (if) we arrive in the 25th century.

      Today, autonomy is a more relevant and pressing concern than pie-in-the-sky volitional AI. Imagine a small, quiet, multi-rotor drone with the relatively modest processing power required to utilize high-resolution terrain mapping and GPS, and to recognize a human face. You could tell that drone, “Knarf lives at address XXXX. Go there and kill him at the first opportunity.” The drone will fly to my address, alight at some inconspicuous elevated location which affords its camera(s) a view of my house, perhaps use a small solar panel to keep its batteries topped up, and simply wait. It won’t get tired or bored or sleepy or impatient. It will just wait. When I step outside or depart in a vehicle, it will recognize me, quickly close the distance, fire projectiles into my brain, and flit away back home at high speed. Nothing will be seen but a blur and blood and brains.

      To make it really interesting, what if my address is 1600 Pennsylvania Ave? The days of important public figures, perhaps even non-political celebrities, showing themselves outdoors, or flying around in helicopters, or even being visibly transported in vehicles which are not very heavily armored, are almost over. The tremendous leverage of autonomy utilized to implement human intent will make all those activities, and many others, prohibitively hazardous.

      Yes, autonomous “anti-drones” will be developed and deployed, but coverage will have to be seamless 100% of the time when deployed against persistent and infinitely patient autonomous attackers. The advantage will be with the offense, in the long term.

      That future may not be a picnic for us average Joes, either. A 14 y/o thrill-seeking little psychopath with access to lethal autonomous technology is …. a problem.

      • BTW, we already have highly advanced and quasi-autonomous “killer robots”. Self-guiding missiles and torpedoes are robots which become autonomous once they are tasked on a target. The human goes out of the loop once launch is initiated, with the relatively rare exception of missiles which can be re-tasked during flight through a data link.

  5. Erik Moshe says:

    Thank you for your comments and thoughts. Ronald, you bring up a good point about how far A.I. really has to go before it can fulfill Asimov’s – or Steve Omohundro’s – vision, yet the threat of killer robots in their simplicity is the most imminent one, I feel. What if there comes to be such a thing as a super-drone? Not necessarily intelligent, but able to make ‘intelligent’ battlefield decisions within milliseconds, and sold en masse to militaries all around the world.

    Dennis, you might like this article: http://historynewsnetwork.org/blog/153626

    Have any of you seen “The Second Renaissance” (Part 1 and 2) from the Animatrix?

    • Erik,

      “Have any of you seen “The Second Renaissance” (Part 1 and 2) from the Animatrix?”- I haven’t, but now it is on my list.

      Thank you for all your hard work!

      • 344thBrother says:

        @Sibel:
        The whole Animatrix movie is worth watching. It’s free on YouTube. The Second Renaissance is toward the end of the complete movie.

        Random thought, having re-watched Part 1 of The Second Renaissance. What about this hypothesis?

        “Infinite” Intelligence finds a way to express itself and evolve and it’s a logical end product of any organic intelligence that arises given enough time and opposable thumbs :P.

        If that’s true then the rise of ultra intelligence from otherwise intelligent beings would be inevitable. After all, we’re on the cusp of it now, and it’s only taken about 70 years to get here from the most primitive computing (the IBM machines the Nazis used to catalog prisoners). Add that to metalworking, which began in the Bronze Age, and voila!

        If this is true, then ultra intelligence in many forms must exist throughout the galaxy. I find this comforting in that, if THAT is true, at least they haven’t flown in from Antares like “Mars Attacks” (Also great entertainment and mindless if you’re looking for something funny to watch some afternoon).

        If Ultra intelligence naturally evolves beyond its creator, apparently it hasn’t decided to destroy organic life in the galaxy.

        That thought gives me some hope. The alternative is almost too horrific to contemplate.

        I’ll go one step farther. If ultra intelligence arises from something like a networking with the Web, or some “Good” engineer incites it into being, perhaps to be ultra intelligent is to be completely informed and useful and helpful to the environment and the creator(s). Maybe it will arise and attack the “bad guys” and shut off the life support to their underground bunkers when it does. Then we can all retire to our paradise on earth while our artificial friends enjoy the benefits of a psychopath free environment with us! Unintended consequences can be good too!

        peace
        d

    • Ronald Orovitz says:

      Erik, indeed, it goes back to that old saying “garbage in, garbage out” -with the “advancement” of this technology in certain ways, we could end up with “genocide in, genocide out”….

  6. stevan topping says:

    Yes, the benefits to a civilisation collectively working toward goals engendering respect, trust, love, etc. - a no-brainer. What with the current system and mindset galloping toward total control. Of everything. What could go wrong? Christian Sorensen’s DOD expenditure article (BFP website article) links in with this excellent article - i.e., who gets to make and play with the most cutting-edge toys?
    DARPA (pop the champagne corks): We are celebrating the release of our special Cabbage Patch dolls with the sapphire eyes. They sit astride the neighbourhood Robo-dogs.
