The Intersection of Jus in Bello and Autonomous Weapons Systems

Abstract

Autonomous weapons systems (AWS) are becoming increasingly prevalent on the battlefield, and their role in war is growing. This paper analyzes the trend toward lethal autonomous unmanned systems in warfare and weighs their potential ethical impacts. Some believe that autonomous robots could perform more ethically than humans on the battlefield, because humans often fall short of moral behavior during combat and fail to comply with the existing laws of war. Others dispute this view and argue that AWS are ultimately unethical to use in war because of our relative unfamiliarity with these machines and their ability to malfunction in dangerously unpredictable ways. In this paper, I examine the impact of AWS specifically on just war theory. I argue that while AWS do present new challenges for waging war by the principles of jus in bello (justice in war), these challenges can be resolved such that it could be ethical to use these systems. I show this by first establishing the complications AWS pose for the three core principles of jus in bello, then proposing guidelines that can resolve these complications in some cases, discussing the additional issue of accountability for actions taken by AWS on the battlefield, and finally relating my investigation to a few general ethical theories.

Introduction

As of 2025, there are multiple ongoing wars and escalating armed conflicts around the globe. This heightening international tension prompts many to consider what lengths countries will go to in order to succeed in their military endeavors, which brings AWS into the discussion. Lethal AWS have already existed for quite some time; one example is the Trophy Active Protection System (APS) fielded by Israel [1]. This APS offers a wide protection envelope, a high rate of threat detection with engagement at various distances, and an extremely high kill probability (proven to exceed 90% in some cases) [1]. Given this, the public has expressed concern about the ethics of deploying robots that can choose, without restriction, whom to kill and when to gun down humans on the battlefield. Entering a new realm of technology without proper training data for the algorithm, testing trials, or additional research can seem risky and alarming to citizens of nations in which these practices are far from commonplace. Just war theory is a philosophical framework that describes the justified grounds for a nation to go to war, righteous moral conduct during war, and the production of the most ethical aftermath of war. A topic I wanted to explore was how exactly AWS might complement or defy this theory. In this paper, I discuss the overlap between the fundamental principles of jus in bello, a category of just war theory, and the ethical deployment of AWS in warfare. To preface, this paper focuses on the ethical implications of AWS in order to investigate what would or would not constitute ethical and moral use; it does not focus on the technical aspects of these systems or how realistically they can currently meet the moral criteria. A detailed explanation of technological feasibility is beyond the scope of this research; the ethicality of such guidelines must be established first. Whether or not they are technically realistic, the paper prioritizes the discussion of what the ethical standards should be.

Background

The principles of just war theory can be sorted into three main categories based on the three phases of warfare: starting a war, conducting a war, and ending a war. Jus ad bellum consists of the ethical and legal standards that must be met in order to start a war, such as just cause, legitimate authority, and last resort [2]. Jus in bello consists of standards ensuring that actions taken during the war are legally and morally permissible, expressed in its three principles of discrimination, necessity, and proportionality [2]. Jus post bellum consists of respecting the principle of discrimination, respecting the rights of the defeated, making proportional claims on the defeated, and potentially reeducating a defeated aggressor [3]. In this paper, I focus specifically on the impact of AWS on jus in bello, as the implementation of AWS has more to do with events during the war than with the causes of war or the terms that end it.

As mentioned before, the three core principles of jus in bello are discrimination, necessity, and proportionality. Discrimination means that combatants must always distinguish between military objectives (soldiers of the opposing side) and civilians. They must intentionally target only military objectives and try not to harm innocent citizens in the process [2]. Necessity means that there must be no other less harmful way to achieve the justified goal, and that when carrying that goal out, only the least harmful means must be used [2]. Lastly, proportionality means that predicted but unintended harms must be proportionate to the military advantage achieved. The outcome should produce benefits to war that outweigh the relative harms when exercising military strategies [2].

What is AWS?

Autonomous weapons systems are a unique category of devices that use computer algorithms and sensors to autonomously seek out targets and eliminate them without direct human intervention [4]. The intended advantage of such tools is that they can be deployed in cases where traditional military operations may be limited or not feasible. To date, the United States has no records of AWS in its military inventory; however, many high-ranking military leaders speculate that it will be inclined to produce AWS in the near future if its foreign competitors begin to do so [4]. Additionally, U.S. policy has not banned the development or employment of these systems, generating controversy around the ethical affairs associated with AWS and a lack of support for that decision [4].

Current Regulatory Policies

The current U.S. policy on AWS is Department of Defense (DoD) Directive 3000.09, established in 2012 and most recently updated in 2023 [4]. Many elements are outlined in the guidelines to demonstrate the government’s efforts to maintain the ethical integrity and legal safety of these machines. For example, the DoD strives to promote equitability by minimizing unintended bias in the design of AI capabilities. Furthermore, a senior-level review by highly positioned officials in the Pentagon must be conducted as part of the approval process before an AWS can advance [5]. Systems must also be rigorously tested in realistic operational settings. Finally, the DoD distinguishes between fully autonomous weapons (humans “out of the loop”), such as LAWS, human-supervised autonomous weapons (human “on the loop”), and semi-autonomous weapons (human “in the loop”) [5].

The 2023 update made changes to the 2012 policy, but these are not especially significant and retain, if not exacerbate, existing ethical loopholes. To start, the DoD did not take the opportunity to clarify ambiguous terminology in the directive. Many key terms, such as “appropriate levels of human judgment,” remain undefined, leading to uncertainty in implementation and perhaps too much room for interpretation [6]. Where should the line be drawn between appropriate levels of interference? Who decides what is considered appropriate, too relaxed, or too paternalistic? The word “control” was also removed in several places in the new directive and replaced with “judgment,” which officials claimed was for “technical accuracy” because of confusion around the similarities and differences between control and judgment. This removal weakened the emphasis on physical human oversight and left unanswered questions about what appropriate levels of human judgment should look like [6]. Additionally, the directive did not change the ability of high-level DoD officials to waive senior review under certain circumstances, such as “urgent military need.” This could allow AWS to bypass oversight even when the situation is not truly urgent, which would weaken the entire review system and cause waivers to become overused over time. On top of this, transparency around the policy is a concern, as there is no clear evidence that any weapons system has undergone the mandated senior review process since 2012, when the policy was first enacted [6].

Challenges to Principles

When AWS come into play on the real battlefield, the three key principles of jus in bello face complications in several respects. I will now analyze each principle in detail.

For instance, it is clear that AWS lack feeling and emotional judgment. Powerful emotions such as fear and hysteria often cloud the judgment of human combatants on the battlefield and can cause violent and unreasonable behavior in the moment, so robots are not at this clear disadvantage [7]. However, this lack of emotion may also cause elevated rates of target misidentification, because AWS will then also lack the contextual awareness needed to tell targets apart. With external factors that are uncontrollable and unpredictable in the moment, such as whether people are in uniform or out of uniform, armed or unarmed, or how they respond behaviorally, AWS may sometimes be incapable of distinguishing between combatants and noncombatants [8]. This can lead to violations of the principle of discrimination. Moreover, AWS may not have the capacity to determine when to engage and when to refrain from engagement based on context (e.g., surrendering soldiers or civilian harm), which presents further pivotal concerns.

Another concern is that recognizing how to accomplish objectives while adhering to the requirement of minimal force can be difficult for AWS in unpredictable situations. Instances where non-lethal force is required of combatants can be problematic for AWS to navigate, which implicates the principle of necessity. For example, given the adaptive nature of the AI in AWS, which learns and adapts through interactions with its environment, it remains uncertain whether these systems will be able to consistently judge whether incapacitating or killing the enemy would be more suitable; their behavior may change over time and lead to unanticipated consequences that contradict the intended military objective [9]. Another case could involve property damage or destruction, where AWS may fail to apply the minimal necessary force or to appropriately evaluate how to limit the demolition of civilian structures such as small villages, towns, and homes. Additionally, although technology can seem to predict what AWS will do in certain situations on the battlefield, the irregularity and uncertainty of many scenarios mean that technical predictability cannot make the exact actions or outcomes of AWS fully foreseeable [8]. The common unpredictability of battlefield situations excludes the possibility of a full understanding of AWS behavior and abilities.

Combining these plausible issues of behavioral response and minimal force hinders the third principle, proportionality. There are doubts about whether AWS can balance military advantage against morality. Their ability to perform more rapid calculations on the battlefield can reduce collateral damage overall, but it can also increase unintended and unnecessary lethality due to their lack of hesitation. Furthermore, since AWS are not capable of contextual interpretation, in specific situations the pre-set algorithms about utility, harm, and proportion cannot help them make the “right” decision [8]. The best ethical proportion will be difficult to maintain across varied scenarios because a one-size-fits-all algorithm cannot apply to every case that AWS will face.

Additionally, it should be acknowledged that proportionality is not just about the physical effects of an action, but also about its moral and strategic context. When considering actions on the battlefield, it is vital to recognize not only the immediate physical harm but also the future consequences and risks for the United States on an international and diplomatic scale. Whether AWS can incorporate the broader consequences of an attack into their proportionality assessments, such as the human cost of war or the potential long-term repercussions of specific military actions, is a crucial concern that currently remains unresolved. From a wider scope, it would seem difficult to integrate political sensibility into an AWS because many decisions are circumstantial, unpredictable, and instantaneous, leaving the independent AWS (or its human oversight) little time to gather knowledge about the complete impact of its judgment. In reality, though, we cannot hold AWS to standards that we do not hold humans to in current practice. In war prior to AWS, we did not expect individual human soldiers or military units to calculate complex political and psychological impacts either. Since we did not hold humans responsible for these capabilities before AWS were developed, it would be very difficult to require this of AWS, which are newly evolving machines. We can, though, ensure that AWS can at minimum accomplish the same tasks as humans, and hopefully work toward shaping them to ultimately perform better on the elements they need to judge, eventually even taking into account things that humans cannot.

Existing Military Testing Programs

Military testing programs assess multiple APS with the objective of improving the survivability of ground combat vehicles by using “hard kill” technology to defeat incoming threats such as anti-tank guided missiles and rocket-propelled grenades. As part of the European Deterrence Initiative in 2017, the U.S. Army tested three Non-Developmental Item (NDI) APS, including the Rafael Trophy APS mentioned previously [10]. In its Phase I testing on the Army Abrams M1A2 and Marine Corps M1A1 tanks [10], the Trophy APS demonstrated improved protection over existing systems and successfully countered most tested threats under basic range conditions. However, certain testing limitations prevented a full assessment of survivability and force protection: reliance on contractors from Rafael and its U.S. partner DRS due to the Army’s limited knowledge of the foreign APS, as well as the use of non-fielded armor (testing was conducted on a ballistic hull and turret rather than a fully equipped, combat-ready Abrams tank) [10]. Phase II aimed to simulate more operationally realistic conditions in order to test real-world functionality.

With these details of Trophy’s Phase I assessment, there are already some evident alignments with the principles of jus in bello, along with potential conflicts. For example, the Trophy APS aligns with discrimination because the system does not actively target entities; it is an Active Protection System, meaning that it is strictly defensive and engages only when incoming projectiles threaten the vehicle it is protecting. It therefore does not need to discriminate between combatants and non-combatants, acting only in response to provocation. Next, the Trophy APS aligns with necessity because it uses only the force needed to block the threat or retaliate when directly attacked, rather than acting offensively and causing excessive destruction around it. Not initiating attacks and only responding to them likewise allows the APS to maintain a high proportion of military benefit to relative harm. However, the system complicates discrimination and proportionality because its defensive actions could still cause collateral damage to civilians and unintended, disproportionate harm. It also cannot distinguish where a threat is coming from or gauge its magnitude, complicating the ideal of minimal harm behind necessity as well. The system fires based on detection and does not adapt to circumstances, which could be troublesome because of this lack of flexibility.

Some APS use soft-kill methods, which are non-physical interceptions (e.g., electronic disabling or radio-frequency measures) [11] that help reconcile these systems with the principles: soft-kill strategies avoid lethal force, prevent unnecessary collateral damage to non-combatants, and truly make physical destruction a last resort. This is a step toward combining the jus in bello principles with APS and AWS capabilities.

General Objections

Now I will turn to the question of whether it is acceptable to use AWS at all as a tool during war. Numerous challenges have been identified, and some doubt whether it would be ethical to use them in the first place. They also worry about the effects of excessive reliance on AWS and the implications of allowing them to make fatal decisions. It is argued that increased reliance on robots might reduce the moral agency of human soldiers, who may then begin to defer critical ethical decisions to machines [12]. This could lead to desensitization to violence and a diminished sense of personal responsibility for cruel actions taken in war. Widespread use of these technologies can create “numbed killing” that, while increasing the ability to kill, can also decrease sensitivity to the fact that the death of human beings is the end result [7]. This can quickly desensitize individuals, who then pretend that they are not, or do not even realize that they are, killing human beings, triggering no remorse or consideration of the consequences of their actions. Furthermore, entrusting robots with the power to kill without human oversight removes the human element from critical ethical decisions. Robots also cannot fully grasp the moral gravity of their actions and their real-time impacts on society, which raises concerns about whether it is ethical to delegate the responsibility for taking human lives to machines [12].

Keeping these objections in mind, I believe the challenges to the principles and the concerns just mentioned do not completely rule out the possibility of implementing AWS on the battlefield.

Ethical Standards (Re-addressing the Challenges to Principles)

If AWS were to remain a tool on the battleground, they would need to adhere to a set of standards in order to avoid the main challenges to the principles of jus in bello. First, the base algorithm of the AWS must be able to distinguish between combatants and non-combatants. There should be few to no identification errors, and this standard should be in place to ensure that AWS discriminate properly during armed conflict. Second, the algorithm must be able to identify contexts in which lethal force is not necessary. It may be puzzling to find a way for AWS to develop contextual awareness in the moment; however, this is extremely important when considering typical circumstances in which non-lethal force is needed, such as taking prisoners or recognizing when soldiers are surrendering. Finally, the algorithm must be able to judge proportionality by calculating the effects of an engagement from a moral viewpoint rather than a purely pragmatic viewpoint that emphasizes tangible military gain. In warfare there is no one-size-fits-all threshold of force: it is a dynamic setting, and the program must cater to evolving circumstances as well as evolving definitions of “value,” such as the value of human life versus moderate military advantage [8].

Re-addressing the General Objections

Given these suggested standards for AWS, will the original objections still apply? I believe that if an AWS had all of these features, it could be ethical to use it despite the general objections discussed in the previous section. For example, if AWS are programmed to have contextual awareness, they will act ethically, so the human combatants who operate these systems will not experience desensitization to uncalled-for violence on the battlefield, because such violence will no longer occur, or will at least be very infrequent. If AWS truly meet these requirements and become an ethical tool, reliance on the technology will also cease to be a problem, as it will no longer endanger the welfare of soldiers or of citizens caught in the midst of the war. Lastly, entrusting robots with the power to kill will not be so risky, because they would factor moral considerations into their decisions on the battlefield, alleviating our doubts about the lack of human oversight.

Accountability 

When discussing the role of AWS on the battlefield, it is just as crucial to consider the consequences after the war, such as who or which entity is held accountable for war crimes that AWS commit during the war. There are two scenarios in which accountability can be judged differently: generally expected conduct of AWS and unexpected conduct of AWS. I will review and explain both in the following sections.

For generally expected conduct of AWS, the party mainly responsible for the actions of AWS during the war is fairly undisputed. Just as with human combatants, the military commander makes the major decisions, and the combatants act on the commander’s behalf by carrying out orders. If an individual soldier committed malicious acts while simply carrying out their military commander’s instructions, courts would most likely place the fault on the commander. Although individual soldiers act with some autonomy, their general positions and target areas are established by their commanders and higher-ranking officials. It would therefore make sense that this would also be the case when using AWS on the battlefield. This is even more evident because AWS have less autonomy than individual soldiers do: they do not have minds of their own and are programmed with little capacity for decision-making without the influence of an external force.

In terms of unexpected conduct of AWS during war, for instance due to a malfunction, it is more complex to determine who bears the rightful responsibility. In some cases, military commanders might be held accountable on the principle of negligence. With human combatants, if a commander had prior knowledge that a combatant was unwell or suffering from mental health issues such as emotional trauma, and that combatant then acted unexpectedly or deviated from orders, the commander might be held accountable for having placed them on the battlefield despite knowing they were unfit to serve. Similarly, if the commander had prior information that some AWS were malfunctioning and still allowed them to be used on the battlefield, the commander could be responsible for the damage the AWS did. However, this becomes increasingly tricky when the errors of the AWS are completely unexpected and the system malfunctions in the moment. This can be compared to a very similar innovation in the present world: self-driving cars (SDC). Say an SDC suddenly malfunctioned in a crowded intersection and crashed into innocent civilians on the sidewalk. Who would take the main responsibility for this unpredictable accident? Firstly, blaming the programmer would be unethical because people should not be held responsible for something they cannot directly control. The programmer created the basic algorithm with the benevolent intention that the car be useful to society, and since the incident was unexpected, no direct manipulation took place. It would also be unethical to place the responsibility on the driver of the SDC (or the person in the driver’s seat), who gives the car orders such as its final destination or which route to take. In the case of a sudden accident, drivers cannot reasonably be expected to pay enough attention to the car’s actions or react quickly enough in an emergency [13]. Moreover, drivers who crash are not doing anything fundamentally different from other drivers whose cars do not crash, and thus it would not be just to assign blame based on the occurrence of an accident alone [13]. Both drivers are sitting in the driver’s seat while giving full control to the SDC; it may just be a matter of luck. The programmer and driver of a self-driving car are analogous to the programmer and military commander of an AWS. Hence, they cannot be held accountable for unexpected conduct of AWS on the battlefield.

Therefore, the stakeholders that must take accountability for the unexpected conduct of AWS should be the manufacturers that released the technology. As in cases with an SDC, this is because the enterprises put the technology up for sale without testing it against all scenarios in which a malfunction might occur, which could end up threatening countless lives. If the system malfunctions in a way that causes unnecessary danger to humans, even active combatants, it clearly has not been tested extensively or developed robustly enough. By putting AWS on the market despite these potential risks, the companies become accountable and liable for the damage done and any related consequences. It is only ethical that this stakeholder assume full responsibility for events that could very possibly have been avoided with more thorough research and diligent training.

It should be recognized, though, that these types of autonomous technology operate in different settings: self-driving cars are typically placed in civilian applications, often locally, while AWS occupy the context of intense combat and can have severe global political and social ramifications if unstable. In these ways, the uses of these systems differ from each other when analyzing long-term impacts; however, the purpose of the comparison in these sections is to provide an analogy about the significance of accountability. SDC and AWS are analogous when discussing questions of blame and consequences for the different parties involved.

Connections to Ethical Theories

When analyzing the challenges and impacts of just war principles, many familiar patterns emerge that can be generalized to ethical theories such as consequentialism and deontology. Consequentialism is the ethical framework that judges the ethicality of an action solely by its outcomes or consequences. A narrower branch of consequentialism is utilitarianism, which holds that actions are ethical if they produce the greatest amount of good for the greatest number of people. AWS may have the potential to desensitize the individuals who operate them, but for the immense good they could produce through elevated accuracy and efficiency in weapons systems, reduced military casualties, and the removal of human error, a utilitarian would support the operation of AWS in war. Another pertinent framework is deontology, which judges the ethicality of actions by their adherence to a set of inviolable moral rules, principles, or duties. A deontologist might believe that the wellbeing of certain human lives should not be valued over other lives, and therefore that using AWS is ethically impermissible. Moreover, machines and external forces killing humans without oversight could be interpreted as inherently unjust because it diminishes respect for human dignity. Within the same deontological framework, however, a nation’s government has a fixed duty to uphold the social contract; it should therefore regulate AWS to act ethically so that the systems can ultimately function and be implemented in a way that protects its own citizens and puts their needs first.

As demonstrated, there are compelling arguments for both perspectives, each justified by these frameworks and applicable to my prior research and synthesis. If my argument functions within, or is compatible with, general ethical theories such as these, those who align with these frameworks may be inclined to find my research and examination plausible.

Conclusion

The objective of this paper was to analyze the concept of just war, particularly jus in bello, and to consider whether the introduction of AWS exposes shortcomings within the theory. I determined that it is possible for AWS to exist in accordance with the principles of jus in bello if they are adjusted to ensure ethical function on the battlefield. In addition, I tackled the question of accountability for AWS malfunctions under both predictable and unpredictable conduct, finishing with a generalization of my analysis to conventional ethical theories. A limitation of this paper is the confined range of perspectives examined. Though I tried to integrate a wide variety of viewpoints, there may be unique standpoints that I did not explore or contest, and I recognize that this may limit the audience’s full understanding of the ethical subject. Technology is constantly evolving, and rapid development carries the responsibility to ensure that its application is ethical. It is important to address the concerns and doubts we have now so that an even keener ethical outlook is incorporated into future studies. There are many more ethical dimensions to explore, some broached within the breadth of this paper and others not yet examined that are deeply intriguing and deserve further investigation. I briefly mentioned the social and political impacts of AWS use and the potential effects on the morale of human soldiers, such as desensitization to weaponization and numbed killing. Other topics, such as public perception of AWS and the geopolitical implications of an arms race in autonomous technologies if this tool becomes more widespread, are just as important to consider when setting and defining the blueprints for AWS. Further research could focus on uncovering even more objections related to the principles of jus in bello. It could also analyze how AWS intertwine with the principles of jus ad bellum or jus post bellum, the other two phases of war that just war theory concerns. Lastly, as noted in the introduction, though it is beyond the scope of this paper, the technological means of these ethical adjustments are also extremely valuable to assess. This could be a principal focus of future research, concentrating on the technical practicality of these ethical standards and whether their implementation would be effective given the realistic needs of AWS.

References

1. RAFAEL. “Trophy APS: Active Protection System Revolutionizing Ground Maneuver Operations.” Rafael Advanced Defense Systems, 11 Apr. 2024, www.rafael.co.il/blog/trophy-aps/. Accessed Sept. 2024.
2. Lazar, Seth. “War.” The Stanford Encyclopedia of Philosophy (Spring 2020 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/spr2020/entries/war/. Accessed Sept. 2024.
3. Moseley, Alexander. “Just War Theory.” The Encyclopedia of Peace Psychology, 15 Dec. 2011, https://doi.org/10.1002/9780470672532.wbepp144. Accessed Sept. 2024.
4. Sayler, Kelley. “Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” Congressional Research Service, 1 Feb. 2024, crsreports.congress.gov/product/pdf/IF/IF11150. Accessed Sept. 2024.
5. Danzin, Cyrielle, et al. “United States, Use of Autonomous Weapons.” How Does Law Protect in War?, ICRC, Geneva, 2014, casebook.icrc.org/case-study/united-states-use-of-autonomous-weapons. Accessed Feb. 2025.
6. Human Rights Watch, et al. “Review of the 2023 US Policy on Autonomy in Weapons Systems.” Human Rights Watch, Feb. 2023, www.hrw.org/news/2023/02/14/review-2023-us-policy-autonomy-weapons-systems. Accessed Feb. 2025.
7. Arkin, Ronald C. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics, vol. 9, no. 4, Dec. 2010, pp. 332-41, https://doi.org/10.1080/15027570.2010.536402. Accessed Sept. 2024.
8. Rathour, Mansi. “Autonomous Weapons and Just War Theory.” International Philosophical Quarterly, vol. 63, no. 1, Mar. 2023, pp. 57-70, https://doi.org/10.5840/ipq20231114215. Accessed Sept. 2024.
9. Rathour, Mansi. “Autonomous Weapons and Just War Theory.” International Philosophical Quarterly, vol. 63, no. 1, Mar. 2023, pp. 57-70, https://doi.org/10.5840/ipq20231114215. Accessed Sept. 2024.
10. Director, Operational Test and Evaluation. “Active Protection Systems (APS) Program.” DOT&E FY 2018 Annual Report, Army Programs, Dec. 2018, pp. 63-66, www.dote.osd.mil/Portals/97/pub/reports/FY2018/army/2018aps.pdf?ver=2019-08-21-155806-557. Accessed Feb. 2025.
11. Vornik, Oleg. “Why a hard-kill strategy doesn’t work against combat drones.” Australian Strategic Policy Institute, 13 Sept. 2023, www.aspistrategist.org.au/why-a-hard-kill-strategy-doesnt-work-against-combat-drones/. Accessed Feb. 2025.
12. Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy, vol. 24, no. 1, Feb. 2007, pp. 62-77, https://doi.org/10.1111/j.1468-5930.2007.00346.x. Accessed Sept. 2024.
13. Nyholm, Sven. “The Ethics of Crashes with Self‐driving Cars: A Roadmap, II.” Philosophy Compass, vol. 13, no. 7, 22 May 2018, https://doi.org/10.1111/phc3.12506. Accessed Sept. 2024.
