Processing Distortion with Peter B. Collins: Mysteries of “No Fly” List

Peter B. Collins Presents Prof. Rebecca Gordon

In 2002, Rebecca Gordon and her partner were checking in for a flight at San Francisco airport when they learned that their names were on the government’s secret “No Fly” list. Gordon explains that her long history of activism and nonviolent protest had somehow earned her this unexpected honor. ACLU lawyers filed suit and won the release of 300 redacted pages, leading federal judge Charles Breyer to order prosecutors to show him every page and explain the redactions, but the victims never got to see the redacted information, and they have never received formal notification that they’ve been cleared. Gordon comments on other cases, including men who were put on the list after refusing to become FBI informants. Gordon has written widely about US torture policies, and we discuss those issues as well.

*Rebecca Gordon teaches in the Philosophy department at the University of San Francisco. She is the author of Mainstreaming Torture: Ethical Approaches in the Post-9/11 United States and the forthcoming American Nuremberg: The Officials Who Should Stand Trial for Post-9/11 War Crimes (Hot Books, 2016). Her recent article for TomDispatch is here and her website is here.

Listen to the Preview Clip Here

Listen to the full episode here (BFP Subscribers Only):

You can subscribe below to listen to this podcast, as well as all others on our site.

SUBSCRIBE

The New Drone Order – Part III_Intro: At the Advent of Winged Drones, Research Progresses Forward

Biology-inspired Engineering and Morphing Technology

Drones with wings? But why?! While some Dronesters are dwelling on the metallic, the plastic, and the 3D-printed, other roboticists and researchers are harkening back to the whims of the natural world. There are birds that can maneuver like no human-built aircraft can. Researchers have found that, relative to its body size, the courtship dive of the Anna’s hummingbird is faster than a jet fighter at full throttle or the space shuttle re-entering the atmosphere. Anyone who has tried knows how frustratingly hard it is to catch a fly, much less swallow one. I once knew an old lady who swallowed a fly. It’s a good thing it wasn’t a drone fly, or she might have sputtered and wheezed. Perhaps she could’ve sued Lockheed Martin if she survived?

The third edition of the New Drone Order series will introduce readers to projects like the Lentink Lab at Stanford University, and other related information.

…………………………………………………………………..

To read the exclusive analysis click here (BFP Community Members Only)

Subscribe & Join BFP Activist Community here

*Read Part 2 here

The New Drone Order – Part II_Intro: Dronetopia: Lessons and Parallels from the Insect World

Drone Warfare, Propaganda, Proliferation, Mutualism, Symbiosis & Biomimicry

What can insect societies teach us about our own? Sure, we bug out from time to time, but we’re intelligent, and they’re not. Right? Well, it turns out that humans share common traits with ants, bees, and other insects. We even go to war in similar ways. This edition of the New Drone Order series will explore how drone technology fits into our world system and question where it’s taking us, drawing on lessons from the insect world as well as an interview with an expert on insect societies and autonomous robotics. Propaganda, proliferation, global sales, the military-industrial complex, and the concept of biomimicry will all be examined. Go on, read it, give it a chance! If you think you’re so different from insects, you’ve got ants in your pants…

…………………………………………………………………..

To read the exclusive analysis click here (BFP Community Members Only)

Subscribe & Join BFP Activist Community here

The New Drone Order is Only Beginning: Intro- All is Buzzing on the Geopolitical Front

Drone technology is moving forward, whether we like it or not. MQ-9 Reapers manufactured by General Atomics are sold to the U.S. Air Force, fitted with Hellfire missiles provided by Lockheed Martin. The military-industrial complex is ticking, unmanned aerial vehicles are soaring, and all is not quiet on the Western front. Few places are quiet on the Eastern half of the world, either. Drone strikes pepper Pakistan, Yemen, Somalia, Libya, and Afghanistan, as the world has become an all-access battlefield where, for the first time in human history, remote-controlled homicide can be carried out with minimal effort.

Things are changing. Warfare has been altered forever. Machines are learning...how to learn. Humans are doing less of the hunting and killing, delegating those duties to tougher, colder customers. The purpose of this series is to examine the players, characters and ideologies that are deeply influencing the way our future is shaping up, in both negative and positive ways. For every drone strike that kills an innocent child in a foreign village, another drone is used for ocean exploration or hurricane detection. We will enter the eye of the storm of controversial issues and attempt to chart a course through territory that pits the right to due process against a rich vein of untapped A.I. (artificial intelligence) technology, and that kicks up dirt on greedy politicians, lobbyists and arms dealers who would rather push a button than fight a war themselves. If you think the United States is winning... I'll only tell you this once. The new drone order is only just beginning, and all is buzzing on the geopolitical front.

Editor’s Note- BFP welcomes Erik Moshe to its team. Future articles in Erik’s new series will be available only to BFP activist members.

The New Drone Order: Part I- A.I. Entities, Our Future Friends or Enemies?

Steve is a scientist, entrepreneur, and jack of many trades. He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He can be seen online contributing to a wide variety of podcasts, discussions, conferences and foundations. One of his goals is to ensure the smooth transition of autonomous robots into our lives without mucking up our own livelihoods in the process. His company Self-Aware Systems started out helping search engines search better, but gradually he and his team built a system for reading lips and a system for controlling robots. If he ever owns a cyborg in the near future and is able to program it himself, it will not be cold-hearted. I'm confident it would be a warm, hospitable homemaker with culinary and family-therapy skills to boot.

"The particular angle that I started working on is systems that write their own programs. Human programmers today are a disaster. Microsoft Windows for instance crashes all the time. Computers should be much better at that task, and so we develop systems that I call self-improving artificial intelligence, so that's AI systems that understand their own operation, watch themselves work, and envision what changes to themselves might be improvements and then change themselves," Steve says.
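The loop Steve describes has three steps: watch yourself work, envision a change, and adopt the change only if it measurably helps. As a rough illustration only (this is not Omohundro's actual system, and the function names here are invented for the sketch), a toy version of that self-improvement cycle might look like this in Python, where the "system" being improved is reduced to a single numeric parameter:

```python
import random

def self_improving_search(score, candidate, mutate, steps=100, seed=0):
    """Toy self-improvement loop: evaluate current performance,
    propose a change, and keep it only if it scores better."""
    rng = random.Random(seed)
    best = candidate
    best_score = score(best)           # watch itself work
    for _ in range(steps):
        proposal = mutate(best, rng)   # envision a change to itself
        if score(proposal) > best_score:
            best = proposal            # adopt only measured improvements
            best_score = score(best)
    return best, best_score

# Example: "improve" a parameter toward the peak of a simple objective at 3.0.
peak = lambda x: -(x - 3.0) ** 2
nudge = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, s = self_improving_search(peak, 0.0, nudge, steps=500)
```

A real self-improving AI would propose changes to its own code and reasoning rather than to one number, but the evaluate-propose-adopt cycle is the core of the idea in the quote above.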

In addition to his scientific work, Steve is passionate about human growth and transformation. He holds the vision that new technologies can help humanity create a more compassionate, peaceful, and life-serving world. He is one of the men and women behind the scenes doing their very best to ensure that killer robots never reach an operable level: either never, or at least not before we're ready to handle them as a species. His "safe AI scaffolding strategy" is one of his main proposed solutions, and a positive way forward.

You can call him an expert in the field of FAI, or friendly artificial intelligence, which is “a hypothetical artificial general intelligence that would have a positive rather than negative effect on humanity.” The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values.

Getting an entity with artificial intelligence to do what you want is a task that researchers at the Machine Intelligence Research Institute (MIRI), in Berkeley, California are taking on. The program’s aim is to make advanced intelligent machines behave as humans intend even in the absence of immediate supervision. In other words, “take initiative, but be like us.”

Yudkowsky realized that the more important challenge was figuring out how to do that safely by getting AI to incorporate our values in their decision making. "It caused me to realize, with some dismay, that it was actually going to be technically very hard," Yudkowsky says. “Even if an AI tries to exterminate humanity,” it is “outright silly” to believe that it will “make self-justifying speeches about how humans had their time, but now, like the dinosaur, have become obsolete. Only evil Hollywood AIs do that.”

Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs, because problems of ethical complexity will not yield to software advances or increases in computing power.

Steve differs in that he is wholly optimistic about the subject. He thinks that intelligent robotics will eliminate much human drudgery and dramatically improve manufacturing and wealth creation. Intelligent biological and medical systems will improve human health and longevity, and educational systems will enhance our ability to learn and think (pop quizzes won’t stand a chance). Intelligent financial models will improve financial stability, and legal models will improve the design and enforcement of laws for the greater good. He feels that it's a great time to be alive and involved with technology. With the safety measures he has developed, Steve hopes to merge machine intelligence with positive psychology, a field that's only a few decades old but has already given us many insights into human happiness.

Cautious attitudes in an evolving drone age

In an article on Vice’s Motherboard entitled "This Drone Has Artificial Intelligence Modelled on Honey Bee Brains", we can see firsthand how bizarre science can get, and how fast we are progressing with machine intelligence.

Launched in 2012, the Green Brain Project aims to create the first accurate computer model of a honey bee brain, and to transplant that model onto a UAV.

Researchers from the Green Brain Project—which recalls IBM’s Blue Brain Project to build a virtual human brain—hope that a UAV equipped with elements of a honey bee’s super-sight and smell will have applications in everything from disaster-zone search and rescue missions to agriculture.

Experts, from physicist Stephen Hawking to software architect Bill Joy, warn that if artificial intelligence technology continues to be developed, it may spiral out of human control. Tesla founder Elon Musk calls artificial-intelligence development simply “summoning the demon.”

British inventor Clive Sinclair told the BBC: "Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability."

"I am in the camp that is concerned about super intelligence," Bill Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Are we jumping the gun with all of this talk of sentient robots triggering an apocalypse? Rodney Brooks, an Australian roboticist and co-founder of iRobot, thinks so. He views artificial intelligence as a tool, not a threat. In a blog post, he said:

Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

In an interview with The Futurist, Steve talked about the best and worst case scenarios for a fully powerful AI. He said:

I think the worst case would be an AI that takes off on its own, its own momentum, on some very narrow task and works to basically convert the world economy and whatever matter it controls to focus on that very narrow task, that it, in the process, squeezes out much of what we care most about as humans. Love, compassion, art, peace, the grand visions of humanity could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, diseases, the fact that people have to work at jobs that aren't necessarily fulfilling, all of those could be taken care of by machine, ushering in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.

Autonomous technology for the greater human good

Steve’s primary concern has been to incorporate human values into new technologies to ensure that they have a beneficial effect. In his paper, “Autonomous Technology and the Greater Human Good”, the most downloaded academic article ever in the Journal of Experimental and Theoretical Artificial Intelligence, Steve summarized the possible consequences of a drone culture that’s moving too swiftly for its own good:

Military and economic pressures for rapid decision-making are driving the development of a wide variety of autonomous systems. The military wants systems which are more powerful than an adversary's and wants to deploy them before the adversary does. This can lead to ‘arms races’ in which systems are developed on a more rapid time schedule than might otherwise be desired.

A 2011 US Defense Department report with a roadmap for unmanned ground systems states that ‘There is an ongoing push to increase unmanned ground vehicle autonomy, with a current goal of supervised autonomy, but with an ultimate goal of full autonomy’.

Military drones have grown dramatically in importance over the past few years both for surveillance and offensive attacks. From 2004 to 2012, US drone strikes in Pakistan may have caused 3176 deaths. US law currently requires that a human be in the decision loop when a drone fires on a person, but the laws of other countries do not. There is a growing realization that drone technology is inexpensive and widely available, so we should expect escalating arms races of offensive and defensive drones. This will put pressure on designers to make the drones more autonomous so they can make decisions more rapidly.

Thoughts on Transhumanism

In an interview featured on Bullet Proof Exec, Steve briefly expressed his views on transhumanism, which is a cultural and intellectual movement that believes we can, and should, improve the human condition through the use of advanced technologies:

My worry is that we change too rapidly. I guess the question is, how do we determine what changes are like, “Yeah, this is a great improvement that’s making us better.” What are changes like, let’s say, you have the capacity or the ability to turn off conscience and to be a good CEO, well, you turn off your conscience so you could make those hard decisions. That could send humanity down into a terrible direction. How do we make those choices?

Interview with Dr. Steve Omohundro

I had the privilege of speaking with Steve, and here's what he had to say.

BFP: Thanks for taking the time to speak with us today. You have an interesting last name. If I may ask, where does it come from?

Steve: We don't know! My great grandfather wrote a huge genealogy in which he traced the name back to 1670 in Westmoreland County, Virginia. The first Omohundro came over on a ship and had dealings with Englishmen but we don't know where he came from or the origins of the name.

BFP: How have drones changed our world?

Steve: I think it's still very early days. The military uses of drones, both for surveillance and for attack, have already had a big effect. There's an article stating that 23 countries have developed or are developing armed drones and that within 10 years they will be available to every country.

On the civilian side, agricultural applications like inspecting crops have the greatest economic value currently. They are also being used for innovative shots in movies and commercials and for surveillance. They can deliver life-saving medicine more rapidly than an ambulance can. They can rapidly bring a life saver to a drowning ocean swimmer. They are being used to monitor endangered species and to watch out for forest fires. I'm skeptical that they will be economical to use for delivery in situations which aren't time-critical, however.

BFP: Do you think artificial intelligence is possible in our lifetime?

Steve: I define an "artificial intelligence" as a computer program that can take actions to achieve desired goals. By that definition, lots of artificially intelligent systems already exist and are rapidly becoming integrated into society. Siri's speech recognition, self-driving cars, and high-frequency trading all have a level of intelligence that existed only in research systems a decade ago. These systems still don't have a human-level general understanding of the world, however. Researchers differ in when that might occur. A few believe it will be impossible but most predict it will happen sometime in the next 5 to 100 years. Beyond the ability to solve problems are human characteristics like consciousness, qualia, creativity, aesthetic sense, etc. We don't yet know exactly what these are and some people believe they cannot be automated. I think we will learn a lot about these qualities and about ourselves as we begin to interact with more intelligent computer systems.

BFP: According to a report published in March by the Association for Unmanned Vehicle Systems International, drones could create as many as 70,000 jobs and have an overall economic impact of more than $13.6 billion within three years. That means, the report says, that each day U.S. commercial drones are grounded is a $28-million lost opportunity. If these economic projections prove accurate, do you see a prosperous industry on the horizon for them as well?

Steve: I believe they could have that impact but $13.6 billion is a small percentage of the GDP. The societal issues they bring up around surveillance, accidents, terrorism, etc. are much larger than that, though. For there to be a prosperous industry, the social issues need to be carefully thought through and solved.

BFP: Do you think that autonomous robot usage will spin out of control without implementation of the Safe-AI Scaffolding Strategy that you and your colleagues formulated?

Steve: Autonomous robots have the potential to be very powerful. They may be used for many beneficial applications but also could create great harm. I'm glad that many people are beginning to think carefully about their impact. I believe we should create engineering guidelines to ensure that they are safe and have a positive impact. The "Safe-AI Scaffolding Strategy" is an approach we have put forth for this but other groups have proposed alternative approaches as well. I'm hopeful that we will develop a clear understanding of how to develop these systems safely by the time that we need it.

BFP: Drones have landed on the White House lawn and in front of Angela Merkel. Where they might land next is unpredictable, but this uncertainty is a reminder that governments around the world are still trying to find their balance when it comes to an emerging technology of this scale and wide application. What positive ways do you posit that drones can affect the world, or affect the work that you are involved in?

Steve: Flying drones are just one of many new technologies that have both positive and harmful uses. Others include drone boats, self-driving vehicles, underwater drones, 3-D printing, biotechnology, nanotechnology, etc. Human society needs to develop a framework for managing these powerful technologies safely. Nuclear technology is also dual-use and has been used to both provide power and to create weapons. Fortunately, so far there hasn't been an unintended detonation of a nuclear bomb. But the recent book "Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety" tells a terrifying cautionary tale. Among many other accidents, in 1961 the U.S. Air Force inadvertently dropped two hydrogen bombs on North Carolina and 3 out of 4 of the safety switches failed.

If we can develop design-rules that ensure safety, drones and other autonomous technologies have a huge potential to improve human lives. Drones could provide a rapid response to disaster situations. Self-driving vehicles could eliminate human drudgery and prevent many accidents. Construction robots could increase productivity and dramatically lower the cost of homes and manufactured goods.

BFP: Have you read any science fiction books that expanded your perspective on A.I.? In general, what would you say got you into it?

Steve: I haven't read a lot of science fiction. Marshall Brain's "Manna: Two Views of Humanity's Future" is an insightful exploration of some of the possible impacts of these technologies. I got interested in robots as a child because my Mom thought it would be great to have a robot to do the dishes for her, and I thought that might be something I could build! I got interested in AI as a part of investigating general questions about the nature of thought and intelligence.

BFP: You recently showed me a video of a supercharged drone with advanced piloting tech that could reach speeds of 90 miles per hour, and that costs about $600. Could you see yourself going out and buying a quadcopter like that, maybe having a swarm of drones spell out "Drones for the Greater Good" in the sky? Or would you rather keep your distance from the "Tasmanian devil" drone?

Steve: I haven't been drawn to experimenting with drones myself, but I have friends who have been using them to create aerial light shows and other artistic displays. The supercharged 90 mph drone is both fascinating and terrifying. Watching the video, you clearly get the sense that controlling the use of those will be a lot more challenging than many people currently realize.

BFP: I've also seen a quadrotor with a machine gun.

Steve: Wow, that one is also quite scary. What's especially disturbing is that it doesn't appear to require huge amounts of engineering expertise to build this kind of system and yet it could obviously cause great harm. These kinds of systems will likely pose a challenge to our current social mechanisms for regulating technology.

# # # #

*Watch Steve's TEDx video from May 2012: Smart Technology for the Greater Good

Erik Moshe is a BFP investigative journalist and analyst. He is an independent writer from Hollywood, Florida, and has worked as an editor of the alternative news blog Media Monarchy and as an intern journalist with the History News Network. He served in the U.S. Air Force from 2009 to 2013. You can visit his site here.

 

Processing Distortion: John Lindh-Detainee 001 in Bush’s “war on terror”

Peter B. Collins Presents Frank Lindh

In the hysteria following 9/11, John Lindh was captured in Afghanistan and falsely accused of being a traitor. In this moving, in-depth interview, his father, Frank, details his young son’s odyssey that led to a 20-year prison sentence for “Detainee 001” in Bush’s “war on terror”. Frank Lindh is an attorney and law school instructor, and the father of three children.

Listen to the Preview Clip Here

Listen to the full episode here (BFP Subscribers Only):

 

SUBSCRIBE
 

This site depends exclusively on readers’ support. Please help us continue by SUBSCRIBING, and by ordering our EXCLUSIVE BFP DVDs.

Jamiol Presents


Notorious “Star Chamber” Courts Protect Government Wrongdoing

U.S. Embraces Tool of Despots


Recently, Mark Hosenball dropped the bombshell that a secret panel of National Security Council officials meets to draw up a “kill list” of American militants. Salon columnist Glenn Greenwald wrote a scathing critique of the panel, comparing it to a notorious English court known as the “Star Chamber.”

“[A] panel operating out of the White House — that meets in total secrecy, with no known law or rules governing what it can do or how it operates — is empowered to place American citizens on a list to be killed by the CIA, which (by some process nobody knows) eventually makes its way to the President, who is the final Decider.  It is difficult to describe the level of warped authoritarianism necessary to cause someone to lend their support to a twisted Star Chamber like that.”

The Star Chamber, an English court dating back to the Middle Ages, reportedly was named for the stars painted on the ceiling of the courtroom at Westminster Palace. Over time, it grew increasingly powerful and corrupt. By the 17th century, under Charles I, it had become a vehicle for prosecuting political dissent. The court’s procedures made it virtually impossible for defendants to get a fair hearing, and it served as a rubber stamp for the monarchy.

“Court sessions were held in secret, with no indictments, no right of appeal, no juries, and no witnesses. Evidence was presented in writing. Over time it evolved into a political weapon, a symbol of the misuse and abuse of power by the English monarchy and courts.”

The Star Chamber also punished religious dissent, ultimately driving the Puritans to seek refuge in North America’s wilderness. Their descendants would form a new nation and endow it with laws that prohibited Star Chamber abuses. Today, “star chamber” is a pejorative term for any administrative body whose “strict, arbitrary rulings and secretive proceedings” “cast doubt on the legitimacy of the proceedings.” Notwithstanding its infamous past, the Star Chamber appeals to government officials who abhor accountability.

The panel that authorized the killing of U.S. citizen Anwar al-Awlaki is the most public U.S. example of a star chamber, but it is far from the only one. The federal government operates a network of star chamber courts, administered by government agencies, for the purpose of hearing appeals from military and civilian federal employees stripped of their security clearances. Many of those employees are whistleblowers who have disclosed government wrongdoing, thus implicating senior officials. Senior officials use the star chambers to punish whistleblowers, to discredit their disclosures, and to discourage other employees from exposing negligence, waste and corruption. Existing whistleblower protection laws are powerless to protect federal employees with security clearances from agency reprisal.

Security clearance star chambers, established by presidential order (Executive Order (E.O.) 12968), violate the U.S. Constitution’s due process protections. These courts go by a variety of names. The U.S. Department of Agriculture (USDA) star chamber is the “Personnel Security Review Board.” The Department of Defense (DOD) calls its star chamber the “Defense Office of Hearings and Appeals.” Each federal agency interprets the executive order differently, and some—for example, USDA—actually provide less due process than E.O. 12968 allows. All are offensive to modern notions of justice, but none have been held accountable. Government officials argue that national security requires the suspension of due process; but a close examination of the appeals process shows that the government’s claim is a fraud. [Read more...]

Podcast Show #43

The Boiling Frogs Presents Stephen Kohn


Stephen Kohn joins us to discuss his book The Whistleblower's Handbook: A Step-by-Step Guide to Doing What's Right and Protecting Yourself. He explains whistleblowing as a civil liberties and First Amendment issue and the role of whistleblowers as enablers of congressional oversight, and discusses the legal and political implications involved in whistleblowing. Mr. Kohn presents astounding statistics and reports illustrating how whistleblowing is far more effective than regulatory enforcement: corporate whistleblowers uncover more fraud and corruption within their companies than all the government police and regulatory authorities combined. Stephen Kohn also talks about the differences and existing double standards between corporate and government whistleblowers, the escalation of retaliation against national security whistleblowers under the Obama administration, the travesty of justice and constitutional violations in the Bradley Manning case, upcoming legislation in Congress to further penalize whistleblowers, and more!

Stephen M. Kohn is the Executive Director of the National Whistleblowers Center, one of the nation’s foremost experts in whistleblower protection law, and the author of The Whistleblower's Handbook: A Step-by-Step Guide to Doing What's Right and Protecting Yourself, as well as the first legal treatise on whistleblowing, Protecting Environmental and Nuclear Whistleblowers: A Litigation Manual. Since 1984, Mr. Kohn has successfully represented whistleblowers in numerous cases (both at trial and on appeal), has testified in Congress on behalf of whistleblower reforms, and has worked directly with the staff of the Senate Judiciary Committee on drafting the Sarbanes-Oxley corporate whistleblower law. Mr. Kohn holds a J.D. from Northeastern University School of Law, an M.A. in Political Science from Brown University, and a B.S. in Social Education from Boston University. In addition to his books on whistleblower law, Mr. Kohn is the author of Jailed for Peace and American Political Prisoners.

Here is our guest Stephen Kohn unplugged!

This site depends exclusively on readers’ support. Please help us continue by contributing directly and/or purchasing Boiling Frogs showcased products.