The “Artificial” of Artificial Intelligence and MIT’s “Moral Machine”
At the Media Lab at the renowned Massachusetts Institute of Technology, researchers wanted to explore some of the ethical and moral dilemmas of Artificial Intelligence, focusing on self-driving cars.
Examples of headlines in the news this week included:
- Should a self-driving car kill the baby or the grandma? Depends on where you’re from (MIT’s Technology Review) 
- Driverless cars: Who should die in a crash? (BBC) 
- Self-driving cars will have to decide who should live and who should die. Here’s who humans would kill. (Washington Post) 
- Who should AI kill in a driverless car crash? It depends who you ask; responses vary around the world when you ask the public who an out-of-control self-driving car should hit (The Guardian) 
- Out of Two Million People, Most Prefer That a Self-Driving Car Kill the Elderly (Popular Mechanics) 
And my personal favorite:
- MIT reveals who self-driving cars should kill: The cat, the elderly, or the baby? (ZDNet)
The Technology Review reported:
The experiment presented participants with various combinations, such as whether a self-driving car should continue straight ahead to kill three elderly pedestrians or swerve into a barricade to kill three youthful passengers.
Researchers compared the results of the ethical dilemma across cultures. In China, for example, where the elderly are more respected, participants were less inclined to spare the young over the old.
Let’s unpack this new field of Artificial Intelligence’s “moral dilemmas.”
The technology being pursued for self-driving cars requires the installation of an extensive network of sensors both in the automobiles and in the built environment. “Small cell antennas,” which are not small, emitting untested millimeter waves, are to be hung on lamp poles, streetlights, bridges, church steeples, billboards, and wherever else they can be attached, along with supporting infrastructure, with very little input or control from communities. (Because the signals travel only a short distance and are subject to “interference” from things like traffic, rain, and humans, many new antennas are needed. Self-driving cars are a major justification for the 5G rollout.)
If the engineering behind self-driving cars can result in the possibility of a careening car’s intelligence deciding whether to hit the elderly lady in the crosswalk or the baby in the carriage on the sidewalk, we need to go back to the drawing board.
Careening self-driving cars are not intelligent.
More importantly, the entire field of data-driven artificial intelligence has already crossed the line in terms of deciding, in a sense, who lives and who dies.
By ignoring both the historical research on microwave radiation and emerging reports of harm from sources including towers and antennas, Wi-Fi, smart meters, and cellphones in an unquantified portion of the population, the electrical engineering community and its industrial, military, political, and economic partners have already decided that damaging health effects to certain babies and to some portion of the elderly are acceptable collateral damage.
Fast Company reported:
Two years later, the researchers have analyzed 39.61 million decisions made by 2.3 million participants in 233 countries. In a new study published in Nature, they show that when it comes to how machines treat us, our sense of right and wrong is informed by the economic and cultural norms of where we live. 
Rather than examining whether or not Chinese drivers are more apt to embrace self-driving cars due to their cultural orientations, it is time for an evaluation of the ethics, or lack of ethics, driving AI, driving the self-driving car paradigm, driving MIT’s researchers and their intimate collaboration with the military industrial complex, driving 5G, and driving the IEEE.
The individuals who comprise the Institute of Electrical and Electronics Engineers are very focused on protecting jobs for electrical and electronics engineers, relishing the technological challenges of designing new ways of delivering 5G, for example via beamforming or satellites (even if it kills the trees), or of designing a foldable 5G smartphone.
What if these groups, including the IEEE, could get off the bandwagon of relying on outdated and inapplicable FCC guidelines, and on new policies designed to override health and environmental concerns, zoning, historical preservation, community choice, protection of vulnerable citizens, and common sense?
Over the next three years the City of Denver will use the $1 million grant to set up 40 solar-powered wireless air pollution sensors at schools in some of the most vulnerable neighborhoods. 
Although the intent is noble, instead of this absurd scenario, in which a possible carcinogen and apparent neurological health hazard is being used to address asthma, what if we stopped the runaway train of manifest destiny, entitlement, squandering of resources, environmental degradation, and economic greed? Do we address asthma while causing an epidemic of neurological issues, autism, early Alzheimer’s, DNA damage, and electrical hypersensitivity?
There is no shortage of challenges for the engineers of MIT and IEEE who might actually want to collaborate respectfully with nature and recognize human biology.
We can all start by rejecting the idea that a cellphone’s influence on health can be quantified by testing it against a plastic head full of the equivalent of Jell-O, or testing Wi-Fi coverage on an airplane using bags of potatoes.
We need to stop using temperature as the only measure of microwave effects, when people are experiencing a myriad of symptoms.
We need the engineering community to stop practicing medicine without a license by dismissing health concerns as mental health issues.
We need an engineering community that recognizes that regulations established 30 years ago for anything, but especially for health, do not offer a free pass on ethics and morals.
We don’t need MIT to use its vast resources to build a “Moral Machine” that produces statistical results designed to tailor autonomous vehicle justifications to various cultures. We need moral MIT students questioning the entire 5G autonomous vehicle small cell paradigm.
And we need an informed populace that is willing to question the energy expenditure associated with frivolous applications of wireless technologies.
We are at a crossroads regarding the science that underlies our assumptions about the frequencies bombarding the environment.
Why does this matter so much right now for you?
Because a decision will be made for you about which homes in your neighborhood will house the infrastructure supporting the roll-out of the new “5G” telecommunications network.
And at this point, the 5G transmitters will be placed on the senior center and the neonatal unit alike, as well as on the school, the hospital, the church, and the mountaintop, disguised as an artificial tree.
It is an inadequate “artificial intelligence,” not intelligent at all, that reduces the question of health to the artificial inquiry into cellphone users and brain tumors.
It is a holistic and moral wisdom that considers the entire myriad of harm and suffering surrounding the artificial electromagnetic environment, which is being foisted on humanity in the name of surveillance, disguised as sustainability.
Decision makers are choosing to replace tobacco cigarettes with asbestos cigarettes, delivering the potential for harm even more quickly and more widely. We are going nowhere fast.
We do not need Moral Machines; we need moral humans.
AI doesn’t choose in a vacuum; we all choose. We do not have to continue to stay drunk on wireless. We can get sober. The moral issues of 5G are already here. The earth cries, alas.
“If you don’t like where the future is headed, now is the time to do something about it.” Dr. Kate Raynes-Goldie (Source: Phys.org)
To take action on 5G in your community: https://whatis5g.info/
Image Credit: Cartoon “Keeping Plastic Heads Safe Since 1996” by Floris Freshman
The article, “The ‘Artificial’ of Artificial Intelligence and MIT’s ‘Moral Machine’”, was syndicated from and first appeared at: https://www.activistpost.com/2018/11/the-artificial-of-artificial-intelligence-and-mits-moral-machine.html.
You may find more great articles by Activist Post on http://www.activistpost.com/.