Showing posts with label artificial intelligence.

24 April 2022

The Dark Future of Freedom

by Emile Wolfaardt

Is freedom really our best option as we build a future enhanced by digital prompts, limits, and controls?

We have already surrendered many of our personal freedoms for the sake of safety – and yet we are just on the brink of a general transition to a society totally governed by instrumentation. Stop! Please read that sentence again! 

Consider, for example, how vehicles unlock automatically as authorised owners approach them, warn drivers when their driving is erratic, alter the braking system for the sake of safety, and resist switching lanes unless the indicator is on. We are rapidly moving to a place where vehicles will not start if the driver has more alcohol in their system than is allowed, or if the licence has expired or the monthly payments fall into arrears.

There is a proposal in the European Union to equip all new cars with a system that will monitor where people drive, when, and above all at what speed. The data will be transmitted in real time to the authorities.

Our surrender of freedoms, however, has advantages. Cell phones alert us when people carrying contagious diseases are close to us, and Artificial Intelligence (AI) and smart algorithms now land our aeroplanes and park our cars. When it comes to driving, AI has a far better track record than humans: in a recent study, Google claimed that its autonomous cars were ‘10x safer than the best drivers’ and ‘40x safer than teenagers.’ AI also promises, reasonably, to provide health protection and disease detection. Hospitals are already using solutions based on Machine Learning and Artificial Intelligence to read scans. Researchers from Stanford developed an algorithm to assess chest X-rays for signs of disease; it can recognise up to fourteen types of medical condition, and was better at diagnosing pneumonia than several expert radiologists working together.

Not only that, but AI promises both to reduce human error and to intervene in criminal behaviour. PredPol is a US-based company that uses Big Data and Machine Learning to predict the time and place of a potential offence. The software looks at existing data on past crimes and predicts when and where the next crime is most likely to happen – it has demonstrated a 7.4% reduction in crime across US cities and created a new avenue of study in Predictive Policing. It already knows the type of person who is likely to commit a crime and tracks their movement toward the place of anticipated criminal behaviour.
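To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of grid-based ‘hotspot’ scoring that predictive-policing tools are generally described as using. The data, the function name and the simple counting model are illustrative assumptions made for this post, not PredPol’s actual, proprietary algorithm.

```python
from collections import Counter

# Illustrative only: a toy hotspot model, not PredPol's proprietary algorithm.
# Each historical incident is (x, y, hour): a grid cell and the hour of day
# (0-23) when the offence was recorded.
incidents = [
    (2, 3, 22), (2, 3, 23), (2, 4, 22),
    (7, 1, 14), (2, 3, 22), (5, 5, 2),
]

def predicted_hotspots(history, hour, top_k=3):
    """Rank grid cells by how often crimes occurred there at a similar hour."""
    counts = Counter(
        (x, y) for (x, y, h) in history
        if abs(h - hour) <= 1  # crude 'similar time of day' window
    )
    return counts.most_common(top_k)

# Where should patrols concentrate around 22:00, based purely on past data?
print(predicted_hotspots(incidents, hour=22))
# e.g. [((2, 3), 3), ((2, 4), 1)]
```

Even this toy version makes the underlying trade plain: the ‘prediction’ is only ever a summary of where and when past offences were recorded, which is precisely why the data being fed in matters so much.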

Here is the challenge – this shift to AI, or ‘instrumentation’ as it is commonly called, has been both covert and ubiquitous. And here are the two big questions about this colossal shift that nobody is talking about.

Firstly, the entire move to the instrumentation of society is predicated on the wholesale surrender of personal data. Phones, watches, GPS systems, voicemails, e-mails, texts, online tracking, transaction records, and countless other instruments capture data about us all the time. This data is used to analyse, predict, influence, and control our behaviour. In the absence of any governing laws or regulation, the Googles, Amazons, and Facebooks of the world have obfuscated the fact that they collect hundreds of billions of bits of personal data every minute – where you go, when you sleep, what you look at on your watch or phone or other device, which neighbour you speak to across the fence, how your pulse increases when you listen to a particular song, how many exclamation marks you put in your texts – and they collect your data whether or not you want or allow them to.

Opting out is nothing more than donning the Emperor’s new clothes. Your personal data is collated, interpreted, and then sold on a massive scale to companies, without your permission and without remuneration. Not only are Google, Amazon, Facebook and the rest marketing products to you; they are also altering you, based on their knowledge of you, to purchase the products they want you to purchase. Perhaps they know a user has a particular love for animals, and that she bought a Labrador after seeing it in the window of a pet store. She has fond memories of sitting in her living room talking to her Lab while ‘How Much is that Doggy in the Window’ played in the background. She then lost her beautiful Labrador to cancer. And would you know it – an ad ‘catches her attention’ on her phone or her Facebook feed with a Labrador just like hers, with a familiar voice singing a familiar song taking her back to her warm memories, and then the ad turns to collecting money for canine cancer. This is known as active priming.

According to Google, an elderly couple was recently caught in a life-threatening emergency and needed to get to the doctor urgently. They headed to the garage and climbed into their car – but because they were late on their payments, the AI shut their car down; it would not start. We have moved from active priming into invasive control.

Secondly, data harvesting has become so essential to the business model that it is already past the point of reversal. It is ubiquitous. When challenged about this by the US House recently, Mark Zuckerberg offered that Facebook would be more conscientious about regulating itself. The fox offered to guard the henhouse. Because this transition was both hidden and wholesale, by the time lawmakers started to see the trend it was too late – and too many Zuckerbucks had been ingested by the political system. The collation of big data has become irreversible, and now practically defies regulation.

We have transitioned from the Industrial Age where products were developed to ease our lives, to the Age of Capitalism where marketing is focused on attracting our attention by appealing to our innate desire to avoid pain or attract pleasure. We are now in what is defined as the Age of Surveillance Capitalism. In this sinister market we are being surveilled and adjusted to buy what AI tells us to buy. While it used to be true that ‘if the service is free, you are the product,’ it is now more accurately said that ‘if the service is free, you are the carcass ravaged of all of your personal data and freedom to choose.’ You are no longer the product, your data is the product, and you are simply the nameless carrier that funnels the data.

And all of this is marketed under the reasonable promise of a more cohesive and confluent society in which poverty, disease, crime and human error are minimised, and a Global Base Income is promised to everyone. We are told we are now safer than in a world where criminals have the freedom to act at will, dictators can obliterate their opponents, and human errors cost tens of millions of lives every year. Human behaviour is regulated and checked when necessary, disease is identified and cured before it ever proliferates, and resources are protected and maximised for the common betterment. We are now only free to act in conformity with the common good.

This is the dark future of freedom to which we are already committed – albeit unknowingly. The only question remaining is this – whose common good are we free to act in conformity with? We have come a long way down the road of the subtle and ubiquitous loss of our freedoms, but it may not be too late to take back control. We need to educate ourselves, stand together, and push back against this wholesale, unwitting surrender of our freedom.

17 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers. Or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie: in acting, or in not acting?

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that most people find it easier to choose to redirect the runaway trolley toward the one worker (a utilitarian basis), whereas they feel an overwhelming visceral unease at pushing a person off the bridge (a deontological basis). Although both acts involve intentionality — resulting in killing one rather than five — it’s seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person to push him off the bridge, sacrificing him for the good of the many.

In a similar practical spirit, neuroscience has connected these reactions to regions of the brain, revealing their neuronal bases, by scanning subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, produced more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, produced greater activity in the amygdala. Follow-on studies have shown similar responses.

So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry automotive function with increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, above all by reducing human error (estimated to account for some ninety percent of accidents), while also easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
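To see why these questions cannot simply be deferred, consider that any collision-avoidance planner must somehow score its candidate manoeuvres, and the weights it attaches to driver, passengers, and pedestrians are themselves an ethical commitment. The sketch below is a hypothetical, deliberately crude cost function written for this post; it is not any manufacturer’s actual system, and the weights are placeholders for exactly the choices societies would need to debate.

```python
# Hypothetical sketch: scoring candidate manoeuvres with explicit ethical weights.
# The weights below are placeholders; choosing them IS the moral question.
HARM_WEIGHTS = {
    "driver": 1.0,
    "passenger": 1.0,
    "pedestrian": 1.0,   # should this be higher? lower? equal?
}

def expected_harm(manoeuvre):
    """Sum probability-weighted harm to everyone affected by a manoeuvre."""
    return sum(
        HARM_WEIGHTS[person["role"]] * person["p_injury"]
        for person in manoeuvre["affected"]
    )

def choose_manoeuvre(options):
    """A purely 'consequentialist' planner: pick the lowest expected harm."""
    return min(options, key=expected_harm)

options = [
    {"name": "stay_in_lane",
     "affected": [{"role": "pedestrian", "p_injury": 0.9}]},
    {"name": "swerve",
     "affected": [{"role": "driver", "p_injury": 0.3},
                  {"role": "passenger", "p_injury": 0.3}]},
]
print(choose_manoeuvre(options)["name"])  # 'swerve', under equal weights
```

A deontological alternative would instead forbid certain manoeuvres outright — say, never actively directing the car at a bystander — whatever the expected-harm totals turn out to be.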

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

26 May 2015

How Google and the NSA are creating a Jealous God

Posted by Pierre-Alain (Perig) Gouanvic




Before PRISM was ever dreamed of, under orders from the Bush White House the NSA was already aiming to “collect it all, sniff it all, know it all, process it all, exploit it all.” During the same period, Google—whose publicly declared corporate mission is to collect and “organize the world’s information and make it universally accessible and useful”—was accepting NSA money to the tune of $2 million to provide the agency with search tools for its rapidly accreting hoard of stolen knowledge.
-- Julian Assange, Google Is Not What It Seems

Who is going to process the unthinkable amount of data that's being collected by the NSA and its allies? For now, it seems that the volume of stored data is so enormous that it borders on the absurd.
We know that if someone in the NSA puts a person on notice, his or her record will be retrieved and future actions will be closely monitored (CITIZENFOUR). But who is going to decide who is on notice?

And persons are only significant "threats" if they are related to other persons, to groups, to ideas.

Google, which has enjoyed close proximity to power for the last decade, has now decided to differentiate Good ideas from Bad. Or, in the terms of the New Scientist, truthful content from garbage.
The internet is stuffed with garbage. Anti-vaccination websites make the front page of Google, and fact-free "news" stories spread like wildfire. Google has devised a fix – rank websites according to their truthfulness.
Google's search engine currently uses the number of incoming links to a web page as a proxy for quality, determining where it appears in search results. So pages that many other sites link to are ranked higher. This system has brought us the search engine as we know it today, but the downside is that websites full of misinformation can rise up the rankings, if enough people link to them.
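The link-counting proxy described above is, at heart, PageRank-style scoring. Here is a minimal sketch over a toy web of four pages, written for this post; it illustrates the general idea only, not Google’s actual ranking system, which blends many more signals.

```python
# Toy link graph: page -> pages it links to. Illustrative data, not real sites.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],   # a page full of misinformation still gains rank if linked to
}

def pagerank(links, damping=0.85, iterations=50):
    """Crude PageRank: a page is important if important pages link to it."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing) if outgoing else 0
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
# 'C' ranks highest: many incoming links, regardless of whether it is true.
```

The point of the toy example is the one the article makes: a page accumulates rank from incoming links regardless of whether its content is true, which is exactly the gap a Knowledge-Based Trust signal is meant to close.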
Of course, the fact that vaccine manufacturers are exonerated from liability by the US vaccine court does not mean they are doing the things that anti-vaccine fanatics say they do. Italian courts don't judge vaccines the same way US courts do, but well, that's why we need a more truthful Google, isn't it?

Google will determine what's true using the Knowledge-Based Trust, which in turn will rely on sites "such as Snopes, PolitiFact and FactCheck.org, [...] websites [who] exist and profit directly from debunking anything and everything [and] have been previously exposed as highly partisan."

Wikipedia will also be part of the adventure.

What is needed by the intelligence community is an understanding of the constellation of threats to power, and those threats might not be the very useful terrorists of 9/11. More problematic are those who can lead masses of people to doubt that 19 novice pilots, alone and undisturbed, could fly planes into the World Trade Center on 9/11, or influential people like Robert F. Kennedy, who likens the USA's vaccine program to mass child abuse.

These ideas, and so many other 'garbage' ideas, are the soil on which organized resistance grows. This aggregate of ideas constitutes a powerful, coherent, attractive frame of reference for large, ever-expanding sections of society.

And this is why Google is such an asset to the NSA (and conversely). Google is in charge of arming the NSA with Truth, which, conjoined with power, will create an all-knowing, all-seeing computer-being. Adding private communications to public webpages, Google will identify what's more crucial to 'debunk'. Adding public webpages to private communications, the NSA will be able to connect the personal to the collective.

And this, obviously, will only be possible through artificial intelligence.

Hassabis and his team [at Google's artificial intelligence program (DeepMind)] are creating opportunities to apply AI to Google services. The AI firm is about teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services [such as truth delivery?]. It could enhance YouTube recommendations for users, for example [...].

But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: 'Eventually, I think human extinction will probably occur, and technology will likely play a part in this.' He adds that he thinks Artificial Intelligence is the 'No.1 risk for this century'. It's ominous stuff. [You can read more on that here.]

May … help us.