Photo by Riccardo on Pexels

From bullock carts to motorised cars to today's most technically advanced autonomous vehicles, the automobile industry has come a long way in making transportation more accessible and convenient. The concept of the self-driving car first appeared in the late 1930s as an electric vehicle guided by radio-controlled electromagnetic fields generated by magnetised metal spikes embedded in the roadway. Over the decades, technology has advanced significantly, and today's vehicles are furnished with sophisticated sensors, detectors, and algorithms that make them more capable than ever. For centuries, we have been trying to improve transportation and communication. One such fascinating technology currently revolutionising the automobile sector is the SELF-DRIVING CAR.

Self-driving cars, also known as autonomous vehicles (AVs), are among the most rapidly developing technologies, with real potential to transform the transportation sector. They are furnished with sophisticated sensors, fine-tuned detectors, and algorithms that enable them to perform driving tasks without human intervention. These cars promise better safety, reduced traffic, increased efficiency, and minimised dependence on human drivers: regardless of disability, age, or ability to drive, anyone can reach their destination. This is also where the primary ethical considerations surrounding AVs originate. Self-driving vehicles pose significant ethical dilemmas that must be addressed now, and as the technology advances, serious ethical concerns are surfacing. Many questions have been raised: who is accountable when an AV gets into a collision? How should an AV behave during high-stakes manoeuvres?

Prioritising vs. Programming:

When a human driver gets into an accident-prone situation, they don't make an analytical or calculated move to resolve it; they make spontaneous decisions. The trolley problem, however, is one of the most critical moral conundrums posed by self-driving cars. It is a thought experiment that forces participants to choose between two possibilities in an imagined situation: a trolley is approaching a group of people, and you can divert it to a different track, sparing the crowd but harming one person on the other track. An algorithm cannot make such a decision instinctively; it needs a pre-programmed, processed understanding of the situation.

First and foremost, the algorithm tries to avoid dangerous situations, but accidents can never be 100% prevented, especially when self-driving vehicles share the road with human drivers. In some cases there will be a significant chance of injury no matter what the car does; how should an AV prioritise then? Here, the car must make a decision that treats the safety of passengers and pedestrians equally.
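Purely as an illustration of the kind of prioritisation logic described above, one could imagine a planner that scores each candidate manoeuvre by expected harm and picks the least harmful one. The names, fields, and risk numbers below are hypothetical assumptions for the sketch, not any manufacturer's actual policy:

```python
# Hypothetical sketch: rank candidate manoeuvres by total expected harm,
# weighting passenger and pedestrian safety equally as the text suggests.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    passenger_risk: float   # estimated injury probability, 0.0-1.0
    pedestrian_risk: float  # estimated injury probability, 0.0-1.0

def expected_harm(m: Manoeuvre) -> float:
    # Equal weight for people inside and outside the vehicle.
    return m.passenger_risk + m.pedestrian_risk

def choose(manoeuvres: list) -> Manoeuvre:
    # Pick the option that minimises total expected harm.
    return min(manoeuvres, key=expected_harm)

options = [
    Manoeuvre("brake_hard", passenger_risk=0.2, pedestrian_risk=0.1),
    Manoeuvre("swerve_left", passenger_risk=0.4, pedestrian_risk=0.05),
]
best = choose(options)
print(best.name)  # → brake_hard (total 0.3 vs 0.45)
```

Even this toy version exposes the ethical problem: someone must still decide the weights, and as the later sections argue, there is no culturally universal way to set them.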

Accountability:

The question of liability raises a primary philosophical argument: man versus machine. Who is responsible when machines make mistakes or cause harm? This issue has become particularly acute with the development of autonomous vehicles and other technologies capable of making decisions independently. If an autonomous car is involved in an accident, who is responsible? Who gets to make a decision that affects lives inside and outside the vehicle? Assigning liability is complicated because, although AVs are designed with sensors and detectors, accidents may happen beyond the control of the vehicle and the ability of its algorithm to understand the situation. The AI doing the driving is merely software carrying out its programming; the program itself cannot be held responsible for an accident. Moreover, machines can never fully replace the human element, and many aspects of driving require human judgment, compassion, and creativity.

The feel of driving:

As humans, we tend to feel things. We wish to judge our own work and get a sense of what has happened. AVs do everything automatically and deliver us to our destination without our intervention, yet many of us feel we should be the ones driving rather than being driven. AVs lack humanity and psychology, and these are not easy to teach in the form of algorithms. Beyond their limitations in empathy, intuition, and emotional intelligence, machines cannot replicate the full spectrum of human judgment, compassion, and creativity. Though machines have many valuable capabilities, they can never fully replace the unique qualities that make us human. As we continue to develop and use technology, we must do so in a way that recognises and preserves the essential role of humanity in all aspects of our lives.

Cultural diversity:

Even if we wish to prioritise either passengers or pedestrians in a demanding situation, the choice differs culturally across the globe. Everyone opines from their own understanding of the facts, and that leads to algorithms that diverge across cultures. In some regions, women are prioritised over men; in some, the young over the elderly, and in others the reverse. Hence, it is a matter of morals that are not consistent worldwide. "People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules," says study co-author Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge. Such subjective, culture-specific biases cannot simply be trained into an AV's design.

Job security:

As technology modernises and self-driving cars become a reality, the job prospects of drivers, and even the relevance of driving licences, come into question. AVs have great potential to displace millions of drivers who depend on driving for their livelihood, which can devastate individuals, families, and communities. The displacement of human workers by machines raises arguments about social justice and fairness: those who lose their jobs may struggle to find new employment or may be forced into lower-paying, less secure work. Job loss also affects people psychologically; they may feel purposeless and slide into depression and anxiety. Another ethical concern is the impact on the dignity of work, which gives individuals a sense of identity, purpose, and social connection. Machines do not have the same moral agency and responsibility as humans; they cannot make moral judgments or act on ethical principles.

Hacking and data privacy:

AVs take in vast amounts of information through their sensors and detectors, including data about their surroundings, video footage, and personal data about people. The privacy and security of collected and stored data must be maintained sensitively: data should not be easily accessible to all, and there should be specific criteria for obtaining it. At the same time, another question arises about putting human data in the hands of an AI, which is technically astounding but carries real risk. Then there is the problem of hacking. Skilled but malicious hackers have figured out how to take control of AVs, and this is among the biggest concerns: life-threatening situations could be engineered, or control of the car could be used to extort its owner. From an ethical standpoint, this is no lesser a crime than cybercrime or murder.

Other morality concerns:

At present, even as self-driving cars are being introduced, regular vehicles remain on the road. This mixed-usage situation is critical and hard to handle. If only self-driving cars were present, we could at least define laws and penalties according to the mistakes made and their consequences. Humans are always in the loop with AVs: in the most harrowing situations, a human may need to make the decision, yet the occupant has no way to operate a fully automated car. Because of this automation, the person in the driving seat becomes negligent in observing their surroundings. An AV cannot warn a pedestrian who is breaking the rules before hitting him, whereas a human driver would generally sound a caution to save them. No standards, ethics, or considerations have yet been taught to the AI. An AV always tries to avoid difficult situations, but one can never have complete assurance that complex problems will not arise.

To account for these issues, manufacturers and the judiciary must come together and try to resolve the problems at the earliest. Some potential solutions to the issues addressed above follow. One is to program ethical decision-making into self-driving cars: designing algorithms that prioritise the safety of every human, considering the number of passengers travelling, the movement of pedestrians on the road, and so on. Governments must intervene on insurance liability and legal questions; manufacturers of self-driving cars should be required to carry liability insurance, ensuring a clear path to compensating the victims of accidents involving self-driving cars. As technology advances, people need to become more technically inclined, which would ease the transition from displaced jobs to new ones. Displaced workers should undergo training in fields such as computer programming, data analysis, and robotics, which do not require in-depth knowledge of everything.

On the other hand, every advancing technology has pros and cons, and this revolutionary technology cannot simply be avoided. To ensure data privacy and security, manufacturers and governments should enact data privacy regulations that require manufacturers to obtain consent from users before collecting and using their data, and the data should be encrypted and protected from harmful, unauthorised access. Driver monitoring systems should also be available so that, in the most harrowing situations, the driver can judge what is to be done instead of the AI.
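The consent-before-collection and data-protection ideas above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical record fields and using a salted one-way hash to pseudonymise the owner's identity; a real system would use proper encryption and key management:

```python
# Illustrative sketch: gate telemetry collection on explicit user consent
# and pseudonymise identifiers before storage. Field names and the salt
# handling are assumptions for illustration, not a production design.
import hashlib
import os
from typing import Optional

def pseudonymise(user_id: str, salt: bytes) -> str:
    # Salted one-way hash: stored telemetry cannot be trivially
    # linked back to the owner without the salt.
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

def collect_telemetry(user_id: str, consented: bool, salt: bytes) -> Optional[dict]:
    # Refuse to collect anything without explicit consent.
    if not consented:
        return None
    return {"owner": pseudonymise(user_id, salt), "speed_kmh": 42}

salt = os.urandom(16)
assert collect_telemetry("alice", consented=False, salt=salt) is None
record = collect_telemetry("alice", consented=True, salt=salt)
print(record["owner"] != "alice")  # → True: raw identity is never stored
```

The design choice here is simply that consent is checked before any data exists, rather than filtering data after collection, which mirrors what consent-first regulation demands.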

Building an AV requires extensive research, development, and collaboration between many committees, and every minute detail is meticulously tested. The algorithms people develop are growing ever more sophisticated and closer to how humans think and act. A 2015 report by McKinsey and Company estimated that self-driving cars could decrease car accidents by up to 90%, potentially saving billions of dollars in damages and health costs worldwide every year. Self-driving cars are never distracted by phone calls, nor do they drive drunk, and they have a far greater capacity to observe their surroundings than humans do. Research has shown that autonomous-car accidents are much rarer than human-caused accidents. As humans, we find deaths caused by machines or AI harder to tolerate than those caused by human error, but we must accept that these revolutionary technologies reduce accidents significantly. At the same time, self-driving cars can reduce unproductive, stressful driving time, increasing productivity for those who wish to get things done while travelling.

Conclusions:

The ethical considerations surrounding autonomous cars have been fiercely debated. The law states that humans are fully responsible for their actions when they drive a car; with self-driving cars, however, there is ambiguity as to who is liable for any harm caused to innocent bystanders. Both human drivers and self-driving cars must make split-second moral choices when involved in accidents, but with self-driving cars that moral grey area has led to many ethical debates.

Those opposed to self-driving cars argue that car accidents should be natural occurrences, not predetermined by algorithms. They also contend that it is unethical for others to decide one's destiny, which is what happens when drivers relinquish control over autonomous cars. Additionally, the amount of data collected and stored raises concerns about privacy and the potential for hackers to cause harm.

However, the benefits of autonomous cars outweigh the negatives. They are developed by some of the most innovative and educated people in society, who are working to create a better world. Numerous studies have shown that self-driving cars are significantly safer than human drivers, and they will also increase efficiency and productivity for people worldwide.

To create ethical autonomous vehicles, developers should learn from past experiences in risk management and morally challenging situations. Companies and self-driving car owners should understand their responsibility for the safety of all stakeholders, and risk management techniques can be used to quantify probabilistic risk transparently and flexibly.

As more laws and regulations are developed regarding autonomous cars, they will work to balance the ethics and economics of self-driving cars. Ultimately, allowing self-driving cars will satisfy the expectations and values of enthusiasts, drivers, and companies as science and technology advance. However, humans must introduce better laws to be aware of potential harm.

.    .    .
