Silicon Valley Study Tour – August 2019 – Novara Silicon Valley 2019

244 replies, 25 voices Last updated by Paolo Marenco 1 year, 8 months ago
  • Benedetta

    Hi everyone and thank you Luca for the topic!

    I personally didn’t know about this test, if you want to call it that, and I found it a bit strange to take, because even though it’s presented in “game” form, with illustrated characters rather than plain sentences, it deals with a serious topic.

    As for my opinion, I find it important to collect data that could help self-driving cars/AI make the “right” choice, even though I think we can all agree that morality is not necessarily 100% objective: these things are not always black or white.

    The questions often considered not only the number of people who could be injured, but also their age, the driver’s responsibility, or even whether the person crossing the street was a thief. But are we sure a human driver’s mind works like this?

    In a dangerous scenario, our first thought – and not even our second, I hope – wouldn’t be whom to harm, but rather whether there is a way to harm no one. It’s true that an incredibly high percentage of accidents today are caused by human error, but in these past few years we have started to see that autopilots are not infallible either.

    Moreover, I firmly believe an artificial mind could never be a valid substitute for a human one, at least with the technology we have today, considering that we have intrinsically human skills, such as common sense, reasoning, and communication, that are more than difficult to replicate!

    Or maybe it’s too soon to even imagine what could happen if an AI system became better than humans at all tasks, not just basic/technical ones (there, they have already won). My own view is that we will get progressively closer to that point without ever reaching it, but I’m curious to hear your opinions!

    Stefano Garavaglia

    Good evening everyone,

    Really interesting and stimulating topic. I personally found it very good at making us contradict ourselves (I certainly did :), because after every decision based on a specific rule in our mind, such as “sacrifice the elderly”, “sacrifice the jaywalkers”, or “go straight on, don’t take action”, the solution to the following dilemma seemed wrong if judged by the same rule. The first decision might be to take action and intervene to avoid the death of three people, killing the only passenger. And what if the following solution were to AVOID intervening “to save” three lives, letting the passenger die? I mean, I would be following the rule of saving as many lives as possible, but not the rule of avoiding intervention.

    As this came to my mind, I asked myself: which rule is the most “right”/moral and should therefore take precedence? And which comes second? In other words, which ordering of rules best suits this kind of decision? Census, gender, age, social behaviour: how much does each of these weigh?

    I think the answer is that this “game” is anything but moral, because you can’t choose who is worthy of death: that is mere immorality. There is no fair set of rules to install on a self-driving car, and of course it’s not possible to settle for an average. Furthermore, I find the difference between a death and a murder of primary importance. Is it fair to KILL a (wo)man to avoid the DEATH of two of them? In the first case, the car is killing a human; in the second, it is letting them die, which is at odds with murder.

    Rather than doubts such as a car’s ability to recognize whether you are rich or poor, good or evil (how would it?), or the hypothetical choice of a real hands-on-wheel driver, I find it more stimulating to discuss the following statement.

    “The lesser of two evils is always the best solution.” I (a hypothetically selfish person) am travelling in a self-driving car: do I want to die to save two lives? Of course one is less than two, but I don’t see a real reason why I should buy and travel in a car that, beyond taking away the pleasure of driving, is ready to let me die, even if that is “fair and moral”.

    Have a nice evening,



    Hello Guys!

    Talking about this platform: it’s a place for public participation in, and discussion of, the human perspective on machine-made moral decisions. You can offer your perspective on which moral dilemma outcomes you find acceptable for a self-driving car, create your own dilemma scenarios, and discuss them with others.

    People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules.

    We humans make mistakes very often. Not only small ones like leaving the keys in the fridge, but also deadly ones like leaving the oven on all day. We tend to be reckless, forgetful, overconfident, easily distracted… and more.

    Computers, on the other hand, have purely pragmatic minds: they sense data and react in programmed, calculated ways.

    In my opinion, one of the creators’ primary goals is to provoke debate among the public, and especially dialogue among users, manufacturers, insurers, and transport authorities.

    The Moral Machine is an interesting test, especially if we consider the regulatory debate on the laws that will govern the movement of self-driving cars.

    In a recent poll conducted by the American Automobile Association’s Foundation for Traffic Safety, 78% of respondents said they were afraid of riding in a driverless vehicle, while a survey conducted by the insurance giant AIG shows that 41% of participants did not want to share the road with a driverless vehicle.

    The same result emerges from surveys conducted over the past two years by the Massachusetts Institute of Technology (MIT) and the marketing company J.D. Power and Associates. Although companies are investing in the safety of these systems, consumers’ fear and distrust keep growing, partly due to the mystification of the issues related to artificial intelligence, and partly because even the professionals seem unable to give convincing, unambiguous answers.


    Hi guys

    I did the test and, to be honest, I didn’t know about it either. In my opinion some of the answers were quite “easy”, while others were far less obvious. I think this test suits the period of technological growth we are living in, as this is an ethical problem that is important to discuss. Moreover, it is useful because it allows researchers to collect information and opinions from different cultures and nationalities. However, I have to say I agree with Stefano: why should I buy an autonomous vehicle knowing that in some situations I’d be the one who dies? I’m more inclined to let destiny take its course. Moreover, I support automatic driving only when it can correct a human mistake (for example, the risk of falling asleep). Unfortunately, sometimes the mistakes lie in the very system that runs the automatic driving, leading to fatal accidents (as has already happened with Tesla cars). Even in that case I would lose my life to a human error committed by someone other than the driver. I will conclude by saying that putting my safety and my life in the hands of an AI doesn’t make me feel comfortable at all.



    Paolo Marenco

    Hey all!!

    great discussion this year too…Novara rocks!

    Happy to invite you to attend our SVST 2018 Reunion, where you can hear about us and the next Silicon Valley Study Tour.

    We’ll be there from Milano, Genova, Verbania… and Novara!

    The invitation is open to you all!



    Good morning guys!

    How are you? I’ve just done this test, and it really put me in trouble!
    Some questions were clear, but others…

    I suppose the central theme is ethical dilemmas; have you heard of them? In many cases I was torn!
    I totally agree with @farinellolinda: automatic machines are perfect only if they can correct human mistakes. The theme is choosing the lesser evil, but when people’s lives are at stake, it’s not so easy to choose.

    I would like to extend the discussion: how has technology changed our lives?
    I think there are two points of view, pros and cons. The pro is that the existence of a machine that drives for us is incredible, and it could find solutions in critical situations; the con is that it can damage people’s lives too.
    What do you think?
    Nice weekend!



    Hi guys,

    This test is designed to determine whether answers are emotionally or rationally driven.

    Of course, from the emotional point of view, nobody would ever want to kill anyone.

    On the rational side, instead, the choice is made by evaluating the least possible damage.

    I believe that, if this test is associated with an innovative project like the self-driving car, the results will be able to tell whether, for this product, injuries can be accepted and the innovation is big enough to justify collateral damage to people or any other living creature.

    For all vehicles used in the 20th century there was a rational acceptance of all possible accidents; the only one that didn’t pass the general check was the Zeppelin.

    All innovation has to pay its toll; just think of space and medical discoveries.

    Have a nice weekend,


    Sara Catto

    Hello everybody,

    I apologize for my absence, and (extremely late) I took the Moral Machine test: the test you proposed is very interesting, as it lets us debate unmanned vehicles, especially “no-driver cars”, and all the laws that will have to be implemented to regulate this new concept of transportation. In my opinion the law-making process has to take several aspects into account, and a test based on questions with only two possible answers doesn’t cover all those aspects in a satisfactory way.

    Computers simply react to inputs according to the way they have been programmed; they don’t have emotions and feelings. Moreover, the scenarios in the test could occur in real life, but the time we are given to answer is extremely long compared to the extremely short amount of time we have to react in reality. Do you agree with me?


    Luca Lostumbo

    Hey guys! Hope everything is fine!

    I did the test; honestly, I didn’t know about it, and it surprised me a lot!

    The spread of self-driving cars could put us in front of some moral dilemmas. Assuming that self-driving cars represent a quantum leap in car technology, it should be taken for granted that safety will also enjoy this leap in quality. Despite everything, computers are still programmed by humans and could run into errors that would crash the system.

    Personally, however, in the various situations I do not see a link with differences of gender, age, income, or social status: life is precious, and nobody deserves to die because of this kind of error. I also have difficulty understanding how the computer could recognize these differences. A separate point concerns those who cross on a red light: obviously they shouldn’t, and any damage caused by that action is solely the responsibility of those who cross.

    If technological development is going to let a car move by itself, the car will have to avoid being a danger. If there is even a 1% chance of the car having problems like that, well, I think it’s better to keep moving with human-driven cars. One solution could be to leave users a margin of control: when someone realizes the car has problems, they could take over.

    I’ve never been a supporter of self-driving cars because I like driving 🙂 In my opinion the test could be useful to highlight how the various countries of the world react to these kinds of problems, or to stimulate the regulatory debate on self-driving cars (for me, the biggest “problem” with them).

    The ultimate goal of the test is therefore to give the artificial intelligence of driverless cars the ability to make difficult choices with ethical implications. I’m curious to see how these ethical dilemmas will be solved; in the meantime, let’s enjoy our human-driven cars!

    Thank you guys, and good luck with the selection! Whatever happens, I found this forum interesting and stimulating for exchanging ideas on topics of our days. Thanks again!

    Have a nice week!



    Hi guys,

    just a little issue: to date we have received only 6 CVs (Luca Lostumbo, Stefano Garavaglia, Selene Delle Cave, Matilda Trevisan, Valentina Orrù, Eunice Curreri) out of 22 participants. I have to remind you that the deadline was December 10th, and the English-format CV is strictly necessary in order to be considered eligible for the final ranking of Novara SVST 2019.

    So, exceptionally, for the 19 participants left, the last call to send your curriculum is tomorrow morning at 9:00 a.m.

    Thank you for your cooperation. I really hope you won’t miss this huge opportunity for merely bureaucratic reasons.

    Bye for now,



    Hi Luca,

    I already sent it some weeks ago, but I’m submitting my CV again; please let me know if you received it correctly.




    Hi Luca,

    I have the same problem as Edoardo: I already sent it a few days ago, but you may not have received it. I’m sending it again; let me know, and thank you for your patience!



    Hi Luca,

    I have just sent my CV; let me know if you receive it, thank you!


    Paolo Marenco

    You guys and girls in the list below are eligible to attend the SVST August 2019.

    Avogadro University and Fondazione Novara Sviluppo will inform you which of you will receive the financial grant to partially cover the Tour expenses.

    Those excluded from the support will still be able to attend the tour by running a crowdfunding campaign – you have plenty of time – to cover the cost, like many students in the past (see Lorenzo Daidone’s SVST 2018 tutorial).

    Happy if you want to come tomorrow to our great SVST 2018 reunion and 2019 launch, Bicocca University, 3–6 pm.

    1° Stefano Garavaglia

    2° Benedetta Savoini

    3° Luca Lostumbo

    4° Linda Farinello

    5° Eunice Curreri

    6° Eduardo Ceffa

    7° Valentina Orrù

    8° Sara Catto

    9° Selene Delle Cave

    10° Matteo Borella

    11° Matilda Trevisan

    12° Giorgio Galli

    Paolo Marenco

    Hey all, as of today there is a Facebook group for the future attendees of the SVST 2019 Tour.

    You can join from now if you are considering doing the Tour, with or without the sponsorship of Avogadro University.

    If interested, send me a friend request on Facebook and I’ll invite you.
