Unraveling the Layers of Computer-Assisted Surgery through Robotic Precision, Image Segmentation, and Virtual Reality



Computer-assisted surgery (CAS) is becoming more prominent in the medical industry for its many benefits, such as increased precision and shorter recovery times achieved by reducing human error through greater reliance on robots. A major system discussed in this paper is the da Vinci surgical system, which allows tiny, controlled movements, superior vision, and greater precision. There are generally two approaches to robotic surgery: transperitoneal and retroperitoneal. In addition to the robotic camera, image segmentation systems based on MRI and CT provide a labelled 3D diagram of the patient from images taken before the procedure, so the surgeon can be more aware of their actions without making further incisions. To simulate these procedures multiple times without a real human, virtual reality systems are implemented; they also provide feedback to the users, enhancing the experience and shortening the learning curve. Lastly, there are debates arising about who takes responsibility for an error during surgery, as well as the inequity that results when robotic equipment is unaffordable in certain parts of the world. CAS requires many steps to fully complete the process, satisfying the patient and the surgeon while addressing the ethical questions it raises. In summary, this manuscript brings attention to the ethical questions and behind-the-scenes algorithms of CAS, specifically the reasons behind the rise in robotic surgeries and the advantages and disadvantages of CAS.

Keywords: Virtual Reality (VR), Image Segmentation, Computer-Assisted Surgery, Da Vinci Surgical System, Magnetic Resonance Imaging (MRI)/Computed Tomography (CT), Laparoscopic Surgery


Globally, more than 310 million major surgeries are performed each year; 1-4% of the patients will pass away, and 5-15% will be readmitted within 30 days1. The reason we are able to save more than 95% of patients is the constant search for the best way to treat them. With automation and the technological revolution, CAS is coming into wide use; computer-assisted techniques have been in use for over 50 years and are applied in a range of cases2. Thus, CAS represents the future of the medical field through its technological advancements, which this paper will explain in depth. Understanding CAS allows the patient and the doctor to choose the best option available when surgery is needed by weighing each option's advantages and downsides. This manuscript provides an overview of this newly rising system, including the training of doctors, the application to medical cases, and the ethical implications.

In times of technological development, computer-assisted surgeries are becoming more widespread in the medical world. Compared to traditional open or laparoscopic nephrectomy, robotic surgeries have been found to have a lower rate of perioperative complications, less blood loss, shorter hospital stays, and better renal function preservation. Specifically, robot-assisted partial nephrectomy (RAPN) resulted in 149 ml of blood loss, while open partial nephrectomy (OPN) resulted in 361 ml. Additionally, RAPN had less need for opioids, with 16% of patients requiring opioids versus 46% for OPN. No significant difference in postoperative complications between the two types of surgeries was found3. Hence, many patients are choosing this method despite the higher cost of the robotic equipment.

One of the newer systems in the computer-assisted surgery industry is the da Vinci surgical system. Through this system, surgeons have access to real-time ultrasound findings, a high-definition display, and imaging results, leading to increased precision and awareness of placement. The system uses minimally invasive approaches, resulting in fewer complications, and offers several benefits: it performs complex procedures more easily than conventional laparoscopic surgery, has smaller robotic arms for increased efficiency, and displays real-time ultrasound findings alongside imaging results. With robotic instruments used as an extension of the surgeon's arm, robotic surgery allows for better control and decreased error4.

With the popularity of robotic surgery increasing, many unique cases have utilized the system, including those requiring high levels of precision. In one case, a 34-year-old female patient was admitted with an 11-year history of a growing hamartoma on the right kidney. A hamartoma is an abnormal growth of cells that are structurally different from the surrounding tissue and places pressure on nearby structures. Contrast-enhanced computed tomography (CECT) revealed that the tumor had grown from the hilum to the lower part of the right kidney, wrapping around the right hilum. Bilateral renal CT angiography revealed that the middle and lower branches of the right renal artery were involved.

After her admission, she underwent a robotic retroperitoneal laparoscopic partial nephrectomy (RP LPN). Da Vinci instruments were used throughout the surgery: a bulldog clamp clamped the renal artery for separation of the giant renal angiomyolipoma (AML), and an electric scalpel resected the AML mass. Lastly, a 2-0 V-Loc absorbable wound closure device tightened the defect, and the postoperative hemoglobin level measured within the normal range5. The assistance of the robotic surgical system and preoperative imaging evaluation played a major role in increasing efficiency and awareness of the patient's situation.

This research project provides an overview of the ways computer-assisted surgeries (CAS) utilize robotic surgical systems and image segmentation to treat tumors. Viewing CAS through different lenses, this paper reviews the literature on robotic surgery, image segmentation, virtual reality, their respective algorithms, and the related ethical discussions. In doing so, it addresses the lack of connection between these systems in the existing literature and describes the surgical process as a whole. Furthering the understanding of the different medical systems allows patients and doctors to choose the best option available, and this manuscript specifically describes CAS and its different types, adding to a research field that has not yet described the entire CAS process and the respective algorithm for each type.


This research paper presents the current understanding of computer-assisted surgeries, namely robotic surgery, image segmentation, and virtual reality, through a systematic review of 20 articles. The databases searched were Google Scholar and the National Library of Medicine. The search terms were “robotic surgery,” “robotic partial nephrectomy and laparoscopic nephrectomy,” “da Vinci surgical system,” and “computer-assisted robotic renal surgery” for the introduction, literature review, and robotic surgery sections, and “MRI and CT,” “region growing algorithm,” “virtual reality medical,” and “ethics of robotic surgeries” for the remaining sections. The selection criteria were based on the specificity of the case studies, the algorithms used, and the statistics reported; articles that mentioned the subject only in general terms were disregarded. Specific examples allow more diverse perspectives and different cases applicable to the CAS field, allowing deeper understanding.

Literature Review

Robotic surgery has not been around for long. The concept of modern robotics began developing in the 1920s, with the word “robot” originating from the Czech word “robota,” meaning “forced labor.” Robotic surgery was first performed in the 1980s, when the first surgical robot, the PUMA 560, was used in a brain biopsy procedure. These robotic surgeries were implemented to reduce movements due to hand tremors. By the 1990s, engineers had developed three systems that combined surgical robots with traditional laparoscopic technology: the da Vinci, AESOP, and Zeus surgical systems6.

Virtual reality (VR) is essential in robotic surgery for decreasing surgeons' learning curves by simulating the surgery without involving a real-life patient. The system provides feedback at the end of each session by comparing the trainee's performance with that of their peers, further facilitating learning. In addition, one system incorporated integrated multiple-choice questions, which 97% of students perceived as leading to higher levels of understanding7. In a study of twenty people, the group that received haptic feedback and 3D vision took 69 fewer minutes to complete the training course, learned faster in 3 of the 4 tasks given, and required significantly fewer attempts to reach proficiency8. Note that due to the small scope of the study, there is room for bias: the trend observed may be specific to those twenty people.

Haptic feedback and 3D vision are major examples of recent breakthroughs in the CAS field. Another development, wired gloves, measured hand position and provided feedback to the user through a series of optical goniometer flex sensors, piezoceramic benders, and low-frequency magnetic fields. This work ultimately fed into the modern VR system, which was originally connected to the idea of “telepresence.”

Many prefer robotic surgery for its elimination of hand tremor and human error, and for the precision and aid it offers surgeons. With its shorter recovery time, fewer complications, and shorter hospital stays, robotic surgery remains greatly popular, with over 11 million robotic surgeries performed worldwide with Intuitive Surgical da Vinci robots as of 20239. Others, however, express concern over robotic errors and ethical crossroads. These systems are divided into categories supporting different parts of the procedure, which this paper will discuss. While previously existing research focuses on individual parts rather than the entire procedure, this paper aims to tie the parts together.

Types of CAS Systems

Robotic surgery

There are broadly two approaches to robotic surgery: transperitoneal and retroperitoneal. This section uses nephrectomy as an example.

The transperitoneal approach is generally used for anterior or lateral lesions. Different port strategies exist based on the position of the robotic camera. With a medial placement, the robotic ports are arranged in a wide V configuration toward the renal hilum, providing a view similar to conventional laparoscopy. With a lateral placement, the camera sits lateral to the robotic arms, reducing arm collisions and increasing space for the assistant and the fourth robotic arm. The fourth robotic arm acts as an extension of the surgeon's hand: it retracts and positions the kidney by being placed under the ureter and lifted to put the kidney on stretch, allowing two-handed dissection of the renal vessels10.

A retroperitoneal approach is generally used for posterior, posteromedial, or posterolateral lesions. In nephrectomy, an oblique 1.5 cm incision is made directly between the 12th rib and the iliac crest to access the retroperitoneum and is extended to the fascia after gently dissecting the muscle fibers. A PDB balloon dissector dilates the retroperitoneal space, and two robotic instruments and a 12 mm trocar for the camera are placed, finger-guided, on the iliac crest along the mid-axillary line. An 8 mm robotic trocar is then positioned along the psoas muscle below the 12th rib-vertebra angle, and a 12 mm laparoscopic trocar is positioned along the psoas muscle behind the iliac crest. A 12 mm Hasson trocar is placed last, and the working space is created by CO2 insufflation. The second 8 mm robotic trocar is positioned on the anterior axillary line on the same axis as the umbilicus. After all the trocars are placed, the da Vinci Si robotic surgical system is positioned and docked at 30° to the patient's anterior cephalad position. The paranephric fat is dissected using robotic scissors and a grasper, increasing the workspace. The kidney is retracted anteriorly, and dissection proceeds along the psoas muscle with robotic assistance until pulsations in the retroperitoneal fat are identified, indicating the underlying renal vessels. Then the tumor is exposed and the hilum is identified. After the tumor is resected and the renal hilum is dissected, the renal procedure is completed11. Thus, compared to the transperitoneal approach, the retroperitoneal approach allows quicker access to the hilum.

Claims in favor of the transperitoneal approach include its larger working space, which allows wider angulation and maneuverability with laparoscopic instruments, and its more familiar orientation by known anatomic landmarks, though it requires bowel mobilization to expose the kidney. Claims against the transperitoneal approach include the camera being closer to the kidney, resulting in a less global view; additionally, there may be more arm collisions, and the approach may be more difficult in patients with wide hips12. Potential benefits of the retroperitoneal approach include a shorter time to renal hilar control by direct access to the renal hilum, avoidance of the peritoneum, earlier mobilization and shorter hospitalization of the patient, and improved access to posterior renal tumors. From September 2010 to December 2015, 81 RAPN (robot-assisted partial nephrectomy) procedures were performed, including 30 cases where the artery was clamped and the hilum was identified. In those 30 cases, no patients experienced complications during the procedure and no open conversion was needed13.

This paper provides insight into the transperitoneal and retroperitoneal approaches in terms of nephrectomy, using this example to illustrate the similarities and differences between the two approaches and their respective benefits and downsides. Depending on the patient, doctors are able to choose which approach to take; as mentioned before, a transperitoneal approach would not be ideal for patients with wide hips.

Recent technology that illustrates this is the da Vinci system. For example, the transperitoneal SP-RARP (single-port robot-assisted radical prostatectomy) technique introduces technical modifications in camera settings and instruments. The new SP robot incorporates a single port containing a flexible camera and three biarticulated arms, which minimizes the number of incisions required to access the surgical site. The transperitoneal approach allows full access to the surgical site across multiple quadrants. In short, the positions of the patient, the trocar, and the camera are modified14. Compared to traditional transperitoneal or retroperitoneal approaches, robotic surgery through systems like da Vinci allows greater efficiency, an easier view of the patient, and higher precision. This paper provides insight into the two approaches in terms of robotic surgery, pointing to the future of the medical field.

Image Segmentation

Image-guided systems focus on increasing the surgeon's awareness of where each body part is placed, resulting in more precision and minimized human error. Prior to the surgery, the patient undergoes computed tomography (CT) or magnetic resonance imaging (MRI) to generate a 3D computer replica of the patient. These images allow a look inside the patient's body, without dissection or excessive incisions, to see where each part is.

While normal X-rays provide a 2D image, which gives little information about tissues other than bone, image segmentation systems produce a 3D diagram with each structure labelled. CT and MRI provide a stack of virtual slices of the body, as if it had been cut into hundreds of thin sections, which together form a 3D model. In MRI, the patient lies in a cylindrical magnet that produces a constant magnetic field, causing certain protons to spin like tops. A second field applies a pulse that tilts these spinning tops into a different orientation. When the pulse ends, the tops return to their original position, releasing energy. Different tissues emit different amounts of energy, and the computer uses these measurements to determine the position of each tissue and build a model, with more energy corresponding to more brightness in the MRI scan.

Surgeons usually favor MRI over CT because it demonstrates anatomy better and is more sensitive to diseased tissue. It also measures the body's responses to magnetic fields rather than using ionizing radiation, which has enough energy to pose risks by damaging tissue and DNA15.

The case of a 48-year-old male demonstrates MRI in a real-life application. The patient presented with fever, tiredness, and diarrhea two days post-vaccination and was referred to the hospital 5 days after vaccination. Through cardiac MRI, T2 high signal intensity and late gadolinium enhancement (LGE) were observed in the mid-wall of the basal inferior wall and the sub-epicardial wall of the mid-septum and infero-septum of the left ventricle. LGE is the most established technique for detecting myocardial damage. Thus, cardiac MRI (cMRI) is useful for diagnosing any form of myocarditis and also provides important prognostic information16.

However, as with any technology, there are limitations. First, there are gray areas between tissues that the computer may have trouble distinguishing because they are very similar. This limitation can be overcome with stronger magnetic fields, which differentiate between the tissues more clearly and provide further detail, though stronger fields may raise safety concerns. Another solution is low-field MRI, a type of MRI that uses lower-strength magnets and produces poorer-quality images; an AI program, such as a deep learning model, can then enhance the quality of the images for medical use. One study, using super-resolution (SR) image reconstruction, obtained good-quality images from low-field MRI systems. In the study, the complete architecture consisted of 1,910,689 trainable parameters; 29,059 images were used for training the model, and 17,292 for validation17.

Second, one part of the scanned area may receive a different amount of energy than another, leading to incorrect brightness measurements in that area. Either issue would result in an incorrect scan, which may lead surgeons to make wrong decisions with low awareness of the body parts. To overcome these limitations, it is crucial for doctors and healthcare workers to be trained well to prevent potential errors in the MRI or CT process, such as delivering different amounts of energy to different scanned areas as mentioned before. One specific solution is enhanced training, as described in the introduction, with virtual reality simulations. These simulations make the training process feel as if the student were in a real surgical situation, resulting in quicker and more efficient learning. In fact, in the 1970s, VR-based simulation training in the aviation industry resulted in a 50% reduction in human error-related airplane crashes18. This will be discussed in further detail in the virtual reality section.

Segmentation Workflow

The algorithm segments an organ based on a set of a priori information, which includes the estimated location, a histogram of the organ's intensities, and a probabilistic atlas of the organ. In the process of organ segmentation, the organ bounding box must first be found. The organ bounding box describes the location of the organ as a rectangle, and methods for finding it can automatically detect the localization of organs within CT scans.

For bone segmentation, the main approach is to estimate the intensity range of bone and apply the region-growing technique to find it. The first step is to find the abdominal region using Otsu thresholding; the next is to model the intensity range of bone ([Tlow, Thigh]), after which a seed point is automatically selected. Lastly, the region-growing algorithm is applied to segment bone, and morphological operators are used to refine the results.

The abdominal region is extracted using the Otsu thresholding method. Since the CT image is in grayscale, pixels are divided into classes by brightness: if a region is brighter than a certain threshold, it is assigned to one class, and otherwise to the other. The spread of brightness values within each class is called the intra-class variance, and Otsu's method chooses the threshold that minimizes it. In terms of the kidney, the system determines the organ bounding box by this method, so that the non-kidney pixels are as similar as possible to each other while the non-kidney and kidney pixels are maximally different. We take the 3rd slice, as it has sufficient kidney pixels to calculate the brightness threshold, which yields the Tlow and Thigh for bone segmentation.

An iterative search is applied to find Tlow. Using a Gaussian model, we calculate the intensity distribution of the selected slice. The parameters of the Gaussian distribution G(μ, σ) are μ, the center of the bell curve, and σ, the standard deviation, which describes how spread out the bell curve is. From the initial threshold value Tlow = μ, we iteratively increase Tlow by 0.2σ until the ratio of the segmented bone volume to the whole volume becomes less than a fixed value. In other words, we start at the center of the bell curve and move to the right until only a certain percentage of the model is classified as bone. Note that a lower threshold value for bone segmentation means larger segmented bone regions, and vice versa19.
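The iterative search above can be sketched in a few lines. This is a simplified illustration: the target bone ratio of 0.15, the synthetic intensity data, and the function name are assumptions for demonstration, not published values.

```python
import numpy as np

def find_bone_t_low(volume, max_bone_ratio=0.15, step=0.2):
    """Start T_low at the Gaussian mean of the intensities and climb the
    bell curve in steps of 0.2 * sigma until the fraction of voxels
    classified as bone (intensity >= T_low) falls below a fixed ratio."""
    mu, sigma = volume.mean(), volume.std()
    t_low = mu
    while (volume >= t_low).mean() > max_bone_ratio:
        t_low += step * sigma  # move right along the bell curve
    return t_low

# Illustrative intensities drawn from a Gaussian G(mu=100, sigma=20).
rng = np.random.default_rng(1)
slice_intensities = rng.normal(100.0, 20.0, size=10_000)
t_low = find_bone_t_low(slice_intensities)
```

Starting at μ, about half the voxels exceed the threshold, so the loop always takes at least one step before the bone fraction drops below the target ratio.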

Next, a seed point is automatically selected after the input image is thresholded. We first define the central zone, whose area is 25% of the abdominal region. Then we generate the thresholded image by thresholding the third slice of the CT volume and obtain the largest segmented object in the central zone, whose central point becomes the seed point. If this point lies in empty space, the nearest point becomes the seed point instead. Then we run seeded region-growing segmentation to segment the bone20. The common procedure in this segmentation is to compare each pixel with its neighbors: if a condition is satisfied, the pixel is added to the same region as one or more of its neighbors. The condition, called the similarity criterion, is significant and is affected by noise, which makes the algorithm susceptible to errors. Each region is iteratively grown by comparing all unallocated neighboring pixels to the regions. The measure of similarity is the difference between a pixel's intensity value and the region's mean, and the pixel with the smallest difference is assigned to the respective region. All pixels are assigned to a region in this manner. Figure 1 shows the pseudocode for this algorithm21, and Figure 2 shows a graphical representation22. When all pixels are assigned, the segmentation is complete.

Figure 1: Pseudocode for seed region-growing segmentation algorithm
Figure 2: Graphical representation of the seed region-growing segmentation algorithm
a) Choose the starting seed point b) Growing region after a few iterations
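The region-growing procedure described above can be sketched in Python. This is a simplified breadth-first variant that uses a fixed tolerance `tol` as the similarity criterion, rather than always absorbing the globally best-matching pixel as in the full algorithm; the toy image and tolerance value are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, absorbing 4-connected neighbors whose
    intensity differs from the current region mean by at most `tol`
    (the similarity criterion)."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    region_sum, region_n = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                mean = region_sum / region_n
                if abs(image[ny, nx] - mean) <= tol:
                    region[ny, nx] = True
                    region_sum += image[ny, nx]
                    region_n += 1
                    frontier.append((ny, nx))
    return region

# Toy "bone" example: a bright 4x4 square on a dark background.
img = np.zeros((10, 10))
img[2:6, 2:6] = 100.0
mask = region_grow(img, seed=(3, 3), tol=10.0)  # recovers the bright square
```

Starting from a seed inside the bright square, the region grows until every boundary pixel fails the similarity criterion, leaving exactly the 16 bright pixels selected.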

Virtual Reality Simulation

Virtual reality (VR) is the use of software to create a simulation of real life. Users put on a head-mounted display (HMD), which allows them to see and engage with the simulated world, including a virtual environment and characters. This technique is utilized for medical education, where students practice through VR without ethical violations or resource limitations.

One way VR is set up is the 360-video method, in which the environment is filmed in 360 degrees to complete a picture of the surroundings. A camera that can film in every direction at once is used, and once the recording is finished, the footage is viewed through the VR headset. However, since the video is a purely linear recording, users cannot interact with the environment, making 360-video more suitable for learning direct information than for simulation or experimentation.

Another type of VR is interactive VR, which involves a totally immersive, dynamic, adaptive, and interactive world. The learner goes through the step-by-step process that would be followed in a real-life emergency, from diagnosis to patient observations and realistic conversations. This makes the learner focus on critical thinking, decision making, and clinical reasoning. At the end of the session, learners receive feedback about their technical and non-technical performance, which shortens learning time and facilitates feedback by pointing to specific actions taken in the procedure23.

VR simulation is effective in reducing human error through realistic surgical simulations. One study showed that VR clinical skills training resulted in a 40% reduction in medical error rates, because the simulation highlighted places and situations where human error may occur, allowing students to anticipate them in real-life situations24. Thus, virtual reality is crucial for decreasing errors, helping patients in the best way possible, and saving the greatest number of lives in the medical industry.

In addition, VR simulations are effective for mental health patients. A study targeting patients with post-stroke depression (PSD) showed significant reductions in systolic blood pressure (p < 0.01) and diastolic blood pressure (p < 0.01) for patients who underwent the Virtual Reality Rehabilitation Landscape (VRTL), a program aimed at rehabilitation through VR simulations, compared to the control group25. As part of recent breakthroughs, VRTL provides personalized programs by creating customized scenarios that align with the patient's goals and challenges. There are three main benefits. First, the simulation distracts the patient from the pain or discomfort they are facing. Second, rehabilitation becomes a more enjoyable experience through active participation and real-life application, enhancing the program's effectiveness. Lastly, home-based virtual reality options allow patients facing geographical limitations to undergo the experience.

Virtual Reality Workflow

The most challenging part of VR is making it realistic within contemporary hardware performance. Computational power can be applied in two different directions. The first is to approximate the physics of the interaction between light and objects, represented by simplified yet well-approximated models. The second is to exploit the natural abilities of the human visual system to understand a 3D world: humans perceive depth, shape, shading, stereo vision, small object movements, and blurring. Computer graphics algorithms use these cues to make the VR environment more realistic.

The general requirement for medical simulation is a real-time output of at least 10-15 frames per second, with the delay between user input and system response less than 100 ms, ideally less than 10 ms. In order to build a realistic model, polygons must be attached to each other to form a smooth surface. For some cases, 10 to 15 thousand visible polygons provide sufficient realism, while others require more than 50 thousand.

There are many ways to represent polygonal objects: 1) wire-frame representation: only the polygon edges are displayed. 2) polygonal representation: object surfaces are represented as a set of flat 2D polygons that combine to approximate the surface. Each polygon is uniquely defined by its list of vertex coordinates, and geometric manipulations need to be applied only to the vertices rather than to each facet pixel. An example is the Marching Cubes algorithm, which extracts a 3D surface from volumetric CT data. 3) bicubic parametric patches: the surface is approximated by 2D piecewise bicubic parametric “patches.” This is a more accurate model, but an edge list must also be maintained. 4) constructive solid geometry: the graphic object is expressed as a set of “primitive” bodies, such as spheres or cubes, combined by Boolean relationships. 5) volumetric representation: the object is represented as a 3D array of voxels. This method uses the entire volume rather than just the surfaces, which is useful for representing 3D medical data such as CT or MRI.
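As a concrete illustration of the polygonal (indexed face set) representation, the sketch below stores a tetrahedron as a shared vertex list plus per-facet vertex indices, so a geometric manipulation such as a rotation is applied only to the vertices, not to each facet. The mesh and values are illustrative assumptions.

```python
import numpy as np

# Indexed face set: geometry lives in one shared vertex list; each face
# is just a triple of indices into that list.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])

def face_normals(verts, tris):
    """Unit normal of each triangular facet via the cross product of two edges."""
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

# Rotating the mesh about the z-axis touches only the 4 vertices;
# the face index list is untouched.
theta = np.pi / 4
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
rotated = vertices @ rot_z.T
normals = face_normals(rotated, faces)
```

This is why the polygonal representation is cheap to manipulate: a transform costs one matrix multiply over the vertex array, regardless of how many facets share those vertices.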

To further the realism, shading is required. Shading results from the interaction of a light source with the object being shaded, and it takes in parameters such as viewer position, position of the surface point, surface characteristics, light-source position, and light-source characteristics. One method, Gouraud shading, produces a continuous, smooth surface by evaluating the illumination equation at each polygon vertex and linearly interpolating those values to produce individual pixel colors; its downside is that it cannot represent sharp specular highlights within facet interiors. Another method, Phong shading, preserves specular highlights by evaluating the illumination model at each pixel, but this process is computationally intense. For this reason, Gouraud and Phong shading are used for different parts of the scene.
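The difference between the two methods can be sketched for a single pixel of a triangle using a simple Lambertian (diffuse-only) illumination model; the vertex normals, light direction, and barycentric weights below are illustrative assumptions.

```python
import numpy as np

def lambert(normal, light_dir):
    """Diffuse (Lambertian) intensity: clamped dot product of unit vectors."""
    return max(0.0, float(np.dot(normal, light_dir)))

def gouraud_pixel(bary, vertex_normals, light_dir):
    """Gouraud: evaluate illumination only at the three vertices, then
    linearly interpolate the resulting intensities per pixel."""
    return float(np.dot(bary, [lambert(n, light_dir) for n in vertex_normals]))

def phong_pixel(bary, vertex_normals, light_dir):
    """Phong: interpolate the normal per pixel, then evaluate illumination
    there, which preserves highlights inside the facet."""
    n = np.dot(bary, vertex_normals)  # barycentric blend of the three normals
    n = n / np.linalg.norm(n)
    return lambert(n, light_dir)

# A triangle whose vertex normals fan away from the light: only one vertex
# faces the light, so the facet interior differs between the two methods.
light = np.array([0.0, 0.0, 1.0])
normals = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
center = np.array([1 / 3, 1 / 3, 1 / 3])  # barycentric center of the triangle
g = gouraud_pixel(center, normals, light)
p = phong_pixel(center, normals, light)
```

At the triangle's center, Gouraud blends the three vertex intensities, while Phong evaluates the light at the blended normal; Phong is brighter here, illustrating the interior detail that Gouraud's interpolation smooths away.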

While these shading algorithms produce a somewhat “plastic-like” result, texture mapping provides a more realistic representation. The process is similar to “gluing” wallpaper onto the object; a second transformation then maps the texture from the surface to the screen. This is used in medical applications such as displaying a 2D ultrasound acquisition on a plane.

Computational hardware is also a major contributor to the advancement of VR/AR in medicine. Application programming interfaces (APIs) simplify writing applications and allow them to run on newer generations of hardware without rewriting the code. An API lets users interact with the system, or lets multiple programs interact, through sets of computer routines; it connects the hardware with the program, as the programmer writes code against the API, which in turn drives the hardware. Therefore, multiple complex algorithms are interconnected to produce one realistic object, and these build up to create a realistic environment for virtual reality users26.

Ethics in Robotic Surgery

Inequality in Different Regions

Besides the technical benefits, there are ethical debates arising around computer-assisted surgery. By the nature of this surgery, many expensive instruments are used throughout the system, such as videoconferencing applications and multiple ISDN lines. Many clinical centers in underdeveloped areas or developing countries will generally not have access to these robots, resulting in an imbalance in patient access and a shortage of specialists in underequipped clinics. Many will have to travel to cities or more developed regions to receive the same treatment as those living ten minutes away.

Also, the costs of robotic surgeries are exorbitant. For instance, robotic surgery in India can cost twice as much as conventional surgery27. This discourages patients from undergoing robotic surgery, which will remain out of reach for the global lower class for a long time.

Not only patients but also surgeons will migrate from these undeveloped regions to well-equipped centers offering better training, affordable new technology, and financial incentives. This will inevitably create many difficulties in the global healthcare system, especially in underdeveloped areas. With specialists leaving, and the few remaining centers unable to afford the necessary, more efficient surgical robots, many lives will potentially be in danger, compounded by the migration of residents, apart from the elderly, in search of better conditions. These problems are already becoming prominent in Eastern Europe, with increasing numbers of specialists migrating to Western Europe28.

Robot Errors and Responsibility

Another question that arises is: who is responsible for robot errors? Robot responsibility depends on the degree of autonomy the robot has. The degrees of autonomy of machines range from kinetic autonomy (the capability to move a part of one's structure), through cognitive autonomy (recognizing, processing, and manipulating information), classificatory autonomy, and first-order institutional autonomy (selecting which institution to take part in to solve trust problems), to second-order institutional autonomy (innovating institutionally).

In the status quo, the owners and designers are held accountable for robotic errors. However, there is no legislation specific to surgical robots, particularly autonomous robotic surgery. If such legislation is drafted, it will become critically important to distinguish acts that are only partially, or not at all, attributable to a human person. The issue is further complicated by the idea of punishment. If a human is found accountable for an error, they naturally receive a punishment; the same does not hold for robots. How could one "punish" a robot for an error? From the utilitarian point of view, a robot would be punished to produce positive consequences, but this is senseless since robots are programmed not to make errors. From the retributive point of view, robots do not suffer from punishment, so this approach is equally useless. Thus, there is no "right" form of punishment; instead, errors must be minimized through thorough validation of the procedures. For example, a robot would have to successfully complete a 3D virtual simulation before performing on a real-life patient.

A real-life example of this issue of responsibility is the Volkswagen (VW) emissions fraud case of 2015. The US Environmental Protection Agency (EPA) discovered that diesel engines sold by VW contained a defeat device, that is, software that renders a required element of the emissions control system inoperative. When the car was being tested, the defeat device turned the emissions control system on; at all other times it was turned off, possibly to save fuel or to improve the car's performance, so that on-road emissions rose above legal limits, up to 40 times the threshold. A debate broke out over whether responsibility lay with the engineers or with the defeat device itself. While in this case the fraud was allegedly the engineers' intention, imagine a futuristic scenario in which VW deploys an advanced AI capable of designing software. If that AI is given the goal of passing the EPA test while minimizing cost, the car will pass all the tests, but the AI may design its own defeat device that no human is aware of. In that case, it becomes far harder to draw the line of who is held responsible29.

Culture and Social Issues

One major problem raised is the loss of the personal connection between patient and surgeon. Patients entrust their lives and health to the doctor, a relationship built on mutual trust and respect, values that a human cannot easily extend to a robot. Some authors even argue that, in place of this connection, patients come to be viewed as mere "beneficiaries" or even "data sources." On the flip side, surgeons may not "feel" as if they are performing the surgery and may be less immersed in the moment than they would be without the robot. Because surgery is a highly specialized process requiring extensive training, this challenge weighs especially heavily on the surgeons themselves. And as these difficulties for surgeons grow, patients may lose their initial trust for fear of an error, connecting back to the importance of the personal connection.


Conclusion

Robotic surgery is becoming more widespread, especially in more developed regions. This minimally invasive approach offers greater reliability and precision, making patients more willing to take part. Image segmentation, prominently of MRI and CT images, is essential, as it provides a diagram of the patient's body that allows better control. Virtual reality systems permit unlimited simulations of a procedure without a real-life patient, the more ethical side of the system, and the feedback surgeons receive at the end of each session facilitates and shortens the learning curve. On the other hand, the disadvantages of computer-assisted surgery include its cost and the ethical debates surrounding it. Few places around the world can implement this new system, owing to cost and maintenance, leading to unequal access to resources between hospitals. Additionally, there is debate over which party holds responsibility in the case of a robotic error.


References

  1. G. P. Dobson. Trauma of major surgery: A global problem that is not going away. International Journal of Surgery (London, England), 81, 47–54 (2020). []
  2. E. I. George, T. C. Brand, A. LaPorta, J. Marescaux, & R. M. Satava. Origins of Robotic Surgery: From Skepticism to Standard of Care. JSLS: Journal of the Society of Laparoendoscopic Surgeons, 22(4), e2018.00039. []
  3. K. Kowalewski, M. Neuberger, M. A. S. Abate, M. Kirchner, C. M. Haney, F. Siegel, N. Westhoff, Michel, P. Honeck, P. Nuhn, & M. C. Kriegmair, Randomized controlled feasibility trial of robot-assisted versus conventional open partial nephrectomy: the ROBOCOP II study. European Urology Oncology. (2023). []
  4. E. I. George, T. C. Brand, A. LaPorta, J. Marescaux, & R. M. Satava. Origins of Robotic Surgery: From Skepticism to Standard of Care. JSLS: Journal of the Society of Laparoendoscopic Surgeons, 22(4), e2018.00039. []
  5. S. H. Luo, Q. S. Zeng, J. X. Chen, B. Huang, Z. R. Wang, W. J. Li, Y. Yang, & L. W. Chen, Successful robot-assisted partial nephrectomy for giant renal hilum angiomyolipoma through the retroperitoneal approach: A case report. World journal of clinical cases, 10(12), 3886–3892. (2022). []
  6. S. Clinic, The History of Robot-Assisted Surgery | The Surgical Clinic. The Surgical Clinic. (n.d.). []
  7. F. Marchegiani, L. Siragusa, A. Zadoroznyj, V. Laterza, O. Mangana, C. A. Schena, M. Ammendola, R. Memeo, P. P. Bianchi, G. Spinoglio, P. Gavriilidis, & N. de’Angelis, New Robotic Platforms in General Surgery: What’s the Current Clinical Scenario?. Medicina (Kaunas, Lithuania), 59(7), 1264. (2023). []
  8. G. Makransky, R. E. Mayer, A. Nøremølle, A. L. Córdoba, J. Wandall, & M. Bonde, Investigating the feasibility of using assessment and explanatory feedback in desktop virtual reality simulations. Educational Technology Research and Development, 68(1), 293–317. (2019). []
  9. K. Hagelsteen, A. Langegård, A. Lantz, M. Ekelund, M. Anderberg, & A. Bergenfelz, Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision. Minimally Invasive Therapy & Allied Technologies, 26(5), 269–277. (2017). []
  10. F. Petros, & C. G. Rogers, Computer-assisted robotic renal surgery. Therapeutic Advances in Urology. (2010). []
  11. A. Porreca, D. D’Agostino, D. Dente, M. Dandrea, A. Salvaggio, E. Cappa, A. Zuccala, A. Del Rosso, F. Chessa, D. Romagnoli, F. Mengoni, M. Borghesi, & R. Schiavina, Retroperitoneal approach for robot-assisted partial nephrectomy: technique and early outcomes. International braz j urol : official journal of the Brazilian Society of Urology, 44(1), 63–68. (2018). []
  12. Radiation health effects | US EPA. US EPA. (2023). []
  13. C. Dong, Y. Chen, A. H. Foruzan, L. Lin, X. Han, T. Tateyama, X. Wu, G. Xu, & H. Jiang, Segmentation of liver and spleen based on computational anatomy models. Computers in Biology and Medicine, 67, 146–160. (2015b). []
  14. C. Dong, Y. Chen, A. H. Foruzan, L. Lin, X. Han, T. Tateyama, X. Wu, G. Xu, & H. Jiang, Segmentation of liver and spleen based on computational anatomy models. Computers in Biology and Medicine, 67, 146–160. (2015b). []
  15. F. G. Petros, & C. G. Rogers, Computer-assisted robotic renal surgery. Therapeutic Advances in Urology, 2(3), 127–132. (2010). []
  16. A. Alaa, Week 6: Region Growing and Clustering Segmentation. Tutorials for SBME Students. (2019). []
  17. Z. Soferman, D. Blythe, & N. W. John, Advanced graphics behind medical virtual reality: evolution of algorithms, hardware, and software interfaces. Proceedings of the IEEE, 86(3), 531–554. (1998). []
  18. J. Pottle, Virtual reality and the transformation of medical education. Future Healthcare Journal, 6(3), 181–185. (2019). []
  19. M. C. Moschovas, I. Brady, J. Noël, M. A. Zeinab, A. Kaviani, J. Kaouk, S. Crivellaro, J. Joseph, A. Mottrie, & V. Patel, Contemporary techniques of da Vinci SP radical prostatectomy: multicentric collaboration and expert opinion. International Braz J Urol, 48(4), 696–705. (2022). []
  20. K. V. Prasad, “Book Rvw: Computer and Robot Vision. By Robert M. Haralick and Linda G. Shapiro.” Journal of Electronic Imaging, vol. 3, no. 02, Apr. 1994, p. 203. (1994). []
  21. K. Watanabe, T. Ashikaga, Y. Maejima, S. Tao, M. Terui, T. Kishigami, M. Kaneko, R. Nakajima, S. Okata, T. Lee, T. Horie, M. Nagase, G. Nitta, R. Miyazaki, S. Nagamine, Y. Nagata, T. Nozato, M. Goya, & T. Sasano, Case Report: Importance of MRI examination in the diagnosis and evaluation of COVID-19 MRNA vaccination Induced myocarditis: Our experience and literature review. Frontiers in Cardiovascular Medicine, 9. (2022). []
  22. F. Graur, M. Frunză, R. Elisei, L. Furcea, L. Scurtu, C. Radu, A. Szilaghy, H. C. Neagos, A. Muresan, & L. Vlad, Ethics in Robotic Surgery and Telemedicine. In Springer eBooks (pp. 457–465). (2010). []
  23. X. Zhang, X. Li, & Y. Feng A medical image segmentation algorithm based on bi-directional region growing. Optik, 126(20), 2398–2404. (2015). []
  24. M. De Leeuw Den Bouter, G. Ippolito, T. O'Reilly, R. Remis, M. B. Van Gijzen, & A. Webb, Deep learning-based single image super-resolution for low-field MR brain images. Scientific Reports, 12(1). (2022). []
  25. J. Pottle, Virtual reality and the transformation of medical education. Future healthcare journal, 6(3), 181–185. (2019). []
  26. G. Kennedy, S. Pedram, & S. Sanzone, Improving safety outcomes through medical error reduction via virtual reality-based clinical skills training. Safety Science, 165, 106200. (2023). []
  27. Y. Li, Q. Zhang & X. Fang, Research on patient-centered design for post-stroke depression patients based on SEM and comprehensive evaluation. Frontiers in Public Health, 11. (2023). []
  28. A. Saniotis, & M. Henneberg, Neurosurgical robots and ethical challenges to medicine. Ethics in Science and Environmental Politics, 21, 25–30. (2021). []
  29. D. G. Johnson, & M. Verdicchio, AI, agency and responsibility: the VW fraud case and beyond. AI & SOCIETY, 34(3), 639–647. (2018). []
