Home Forums Silicon Valley Study Tour – August 2019 Bicocca Silicon Valley 2019

140 replies, 13 voices Last updated by Serena 1 year, 5 months ago
  • Gianluca

    Hi Everyone! 😃

    I’m sorry to join the conversation this late, but I’m currently on Erasmus+ and I only discovered this project the day before yesterday through an iBicocca post.
    My name is Gianluca Belloni, I’m 21, and I’m in the third year of the “Marketing, comunicazione aziendale e mercati globali” degree course.
    I’m really passionate about technology and digital marketing, so I read a lot about those two subjects, and I also produce some content (mainly videos) about the latter. If you want to see some of it and give me feedback or advice, just tell me!

    Talking about the main topic of this conversation, AI: I just read all your comments, and I think we all agree about it.
    Like any other technology, AI is just a tool, and it can’t be good or bad by itself. What will determine the impact of AI on the world is the use that we, as humankind, make of it.
    Like @davidetoniolo, I’m optimistic seeing the good steps that the EU is taking on this subject. I also think it is really important to talk about this topic so that people can form their own opinions and not just be frightened by the possible consequences of this innovation. And in doing that, the warnings of people like Stephen Hawking and Elon Musk are crucial.

    Probably the biggest fear about AI is that it could begin “thinking” on its own and start a revolution against us, just like in the film I, Robot (@danielafiorellino, I remember it because it’s one of my favorite movies!). The “Three Laws of Robotics” can be a really useful starting point, but in the film the AI starts the revolution precisely because of those laws, seeing humanity as a threat to the Earth and to itself!

    Can artificial intelligence really start to think and have dreams? I don’t know, but I’m sure that it will shape our future, and it is already doing so. From self-driving cars to virtual assistants, AI is already among us. The UK police are also trying to use AI to prevent crimes. That is a great example of one of the many good uses it could have.

    Thank you @marcopastore for sharing the video about the machine-learning AI playing Breakout. It’s crazy to see how it can actually learn from its mistakes and work out the best way to beat the game in just four hours.
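    That Breakout agent was DeepMind’s deep Q-network; the core idea of learning action values from trial and error can be shown with plain tabular Q-learning. Below is a minimal sketch on a toy five-state corridor (the environment, hyperparameters and seed are all invented for illustration; the real agent learned from raw pixels with a neural network):

    ```python
    import random

    # Toy tabular Q-learning sketch. DeepMind's Breakout agent used a deep
    # Q-network over raw pixels; this corridor world only illustrates the same
    # trial-and-error update rule, with made-up hyperparameters.
    # States 0..4 on a line; action 0 = left, 1 = right; reaching state 4 pays reward 1.

    N_STATES = 5
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

    def step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward, nxt == N_STATES - 1  # next state, reward, episode over?

    def train(episodes=500, seed=0):
        rng = random.Random(seed)
        q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
        for _ in range(episodes):
            state, done = 0, False
            while not done:
                # Epsilon-greedy: mostly exploit, sometimes explore (i.e. "make mistakes")
                if rng.random() < EPSILON:
                    action = rng.choice((0, 1))
                else:
                    action = 0 if q[state][0] > q[state][1] else 1
                nxt, reward, done = step(state, action)
                # Q-learning update: nudge the estimate toward reward + discounted future value
                q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
                state = nxt
        return q

    q = train()
    # 1 means "go right" is now the preferred action in that state
    print([1 if q[s][1] > q[s][0] else 0 for s in range(N_STATES - 1)])  # → [1, 1, 1, 1]
    ```

    The agent is never told the rules; it just gets rewarded for reaching the goal, and after a few hundred episodes of mistakes the value estimates point it in the right direction, which is exactly what happens (at much larger scale) with Breakout.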

    To answer @francescatomasello’s question, it’s actually possible to use machine learning with games like GTA. Some time ago I found a programmer who had created a self-driving car in GTA V, called Charles. He documented the whole process in a playlist on his YouTube channel, and the development is summed up on this page. I found it incredible to see how the car got better at driving video after video (even if it still seems that the driver is a little drunk 🤣). Let me know if it surprises you too!

    I hope to hear from you soon,


    Jessica Amianto Barbato

    Hi everyone, I’m Jessica Amianto Barbato. I’m a 22-year-old student who graduated in Psychosocial Sciences of Communication and is currently enrolled in the first year of Theory and Technology of Communication. I am among the founding members of Radio Bicocca and I currently take care of its website and digital content, writing posts and shooting videos at events.

    If I’m not studying or working, you’ll likely find me playing an instrument, writing, at the cinema or at concerts.

    I’m quite a creative mind and I’m always out there searching for something new to do; I fear I might be somehow allergic to spare time: I tend to fill every single free minute of my life with interesting activities to try, and I am curious about practically anything!

    Sorry for having joined the forum this late!

    ABOUT THE TOPIC

    First things first, I have been doing some research on AI as a tool to collect data about a website’s audience that shares content through the dark social. In a few words, dark social (read more: https://www.ibm.com/blogs/think/be-en/2018/05/08/marketing-dark-dark-social/) happens when users copy and paste an untraceable URL into a private chat (e.g. WhatsApp or Facebook Messenger) or on social networks in order to share it with their contacts. Those who manage the website need an alternative to the more traditional Google Analytics to get to know their audience and understand their online behaviour. In this article published in The Washington Post (https://www.washingtonpost.com/news/theworldpost/wp/2018/09/28/artificial-intelligence-3/?noredirect=on&utm_term=.55e02a68b9e0) the author deals with China being more advanced in AI research than the United States and European countries (@davidetoniolo mentioned data analysis as a fruitful field for AI to operate in), and with artificial intelligence capable of creating artwork that elicits emotions, which sounds amazing to me!

    What I found peculiar is the social-network-oriented usage of AI, which deals with the impact of social media on democracy and trustworthy political news on Facebook. According to this, Italy has been involved in the study of a new approach to prevent fake news from spreading online thanks to a sort of reputation ranking assigned to journalists (anyone thinking of Black Mirror here?). Also, going back to the dark social topic, the article puts forward the idea that AI would be a useful way to track misinformation that spreads through the encrypted WhatsApp environment. An anthropologist claims that many mob killings in India (here is some news coverage if you don’t know what I’m talking about: https://www.nytimes.com/interactive/2018/07/18/technology/whatsapp-india-killings.html) were fueled by the diffusion of fake news through WhatsApp and that, therefore, “a crowdsourced system managed by human moderators” could monitor problematic content that users forward to each other.
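    A “crowdsourced system managed by human moderators” could, in its simplest form, weight each user flag by the flagger’s reputation, similar in spirit to the journalist reputation ranking. A toy sketch of that idea (the threshold, user names and reputation scores are all invented for illustration):

    ```python
    # Toy sketch of reputation-weighted crowd flagging: a forwarded item is queued
    # for human moderator review once the reputation-weighted mass of user flags
    # crosses a threshold. Threshold, names and scores are invented for illustration.

    REVIEW_THRESHOLD = 2.0
    DEFAULT_REPUTATION = 0.1  # flags from unknown users still count a little

    def needs_review(flaggers, reputations):
        """flaggers: user ids that flagged the item; reputations: id -> weight in [0, 1]."""
        score = sum(reputations.get(user, DEFAULT_REPUTATION) for user in flaggers)
        return score >= REVIEW_THRESHOLD

    reputations = {"veteran_moderator": 1.0, "fact_checker": 0.9, "newcomer": 0.2}
    print(needs_review(["newcomer", "anonymous42"], reputations))                        # → False
    print(needs_review(["veteran_moderator", "fact_checker", "newcomer"], reputations))  # → True
    ```

    The point of the weighting is that a handful of trusted flaggers can trigger review faster than a swarm of throwaway accounts, which is one way to make crowdsourcing resistant to brigading.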


    That being said, I know that this might sound like an “invasion of privacy” to many of you, and that’s where the worries expressed by Sundar Pichai start to make sense: would democracy be at risk in such a controlled scenario? Diving deeper into the matter, would it be too big a loss if what you got in return was safety? Let me explain: I think I would consider giving away a small part of my privacy if fake news generated crime, but what kind of content would be censored? Would they try to shape my political views?


    I found an article by IBM (https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html) that considers both the negative and the positive sides of AI, claiming that it is essential to build trust in these systems and to educate people about how they work and how they can benefit from them. I think, in fact, that it is also important to take into account how the public will accept such a huge innovation; what I fear most is that, once an ethical usage of AI is achieved, let’s say, government-side (or even company-side, if we consider the huge improvement in marketing that AI could lead to), people won’t be too fond of relying on it.

    We all know that governments are aware of the ethical problems artificial intelligence brings along, but are they considering the possible opposition to the new technology?


    Coming to the project you asked us to comment on, I think it’s such a forward-looking idea! For my part, I would totally love to be able to actively participate in policymaking, but I can also conceive of a future where people are skeptical about the practical benefits of AI. In my opinion, the only way to build awareness of how AI works is education, and shifting the conversation from what governments look forward to doing with it to what people actually think and know about artificial intelligence could ease its adoption in the future. Clearly I’m talking about educating the new generations, who will potentially witness a widespread presence of AI in their lives; what could the government do to make people conscious of what happens behind the scenes of artificial intelligence? How can it increase trust in new technologies?

    Strolling through your comments, I read @danielafiorellino asking whether AI will make us lazy (maybe lazier than we already are) or even replace us. Those questions made me realize that I cannot really foresee a future that resembles a sci-fi scenario at its core; I struggle to imagine a world where machines take control over humans (my professor always says that machines are stupid and it’s the human component that makes them “intelligent”; I don’t know if AI makes the case, but I kind of back him in saying that we would never give up on being the most intelligent beings on Earth). Sure, AI will make things easier, but we’ll find a way to value even the smallest of our efforts.


    Hi guys, my name is Valentina, I am 36 years old. I am a student-worker and also a mother. I am a Communication Science graduate and decided to keep studying Law at university, in order to unlock more job opportunities. I am dynamic and alert; I like reading, traveling, visiting museums and listening to good music. From the SVST project I expect to get to know new people and new socio-cultural contexts, as well as to dive deeper into issues concerning the digital world and therefore the future.


    Hi everyone!
    I am new to the group; I would like to be very active here, and I thank Marco Pastore for his work as a moderator.

    About an hour ago I posted my short description. I hope not to be discriminated against because of my age; I have never had this kind of experience abroad and I would be very happy to have this opportunity. For almost a year I have carefully considered all the proposals of iBicocca, and the “Silicon Valley 2019” project would be a great achievement for me.

    Regarding the topic of discussion: Google CEO Sundar Pichai, in an interview with the Washington Post, pointed out about AI that “I believe the world of technology must realize that we cannot just build things and then, if anything goes wrong, fix and correct them afterwards; that does not work”. In fact, even though we are at the dawn of the use of AI, there are already clear examples of non-positive outcomes from what initially seemed to support normal human activity. From self-driving cars that might cause fatal accidents to rigged elections, these are not just threats but real risks. In 2018, in fact, the emblematic example of a self-driving car causing an accident raised serious concerns about whether a human being still needs to supervise AI, and not only in terms of building ethical protocols. Equally, in view of the upcoming US presidential elections in 2020, a serious issue might arise over the use of new technologies that could compromise information (the Russiagate case has already been serious enough in this respect: an ongoing investigation, still conducted by special prosecutor Mueller, in which it is possible to find elements of incriminating cases of espionage in favor of foreign powers, carried out by means of information technology).

    Other very sensitive areas to keep an eye on in 2019 are facial recognition, already used by various police forces, which can lead to great violations of privacy, and so-called “deep fakes”, fake videos created with AI so perfect as to seem real. Another disturbing aspect is the racial and gender discrimination inherent in algorithms, which has emerged from several studies and which can create problems, for example, if AI is used for job selection.

    A future in which AIs gain the upper hand over humans is still far away, but their increasing popularity in recent years makes them dangerous, especially because of the uses that can be made of them.
    Have a nice day.
    Hope to hear from you soon,
    Valentina Suffia

    Marco Pastore

    Hello to all the newcomers!

    In particular, hello to @Valentina: we are glad to have a mother here among us, and I can tell you for sure that the evaluation will be based only on the thoughts expressed in the comments, not on age!

    I want to introduce a second topic on the forum.
    For candidates who join the discussion after my post: you can express your opinion starting from this new topic without considering the previous one (although this second topic is really just one small frame of the more general discussion we had first).

    Look at this incredible video about the use of facial recognition in China: https://www.youtube.com/watch?v=lH2gMNrUuEY

    Then tell us what you think about it.
    How can we decide where to draw the line between order and safety on one side and surveillance and privacy on the other?

    Let’s discuss!


    Davide


    Hello to all the newcomers; we are finally growing!

    @marcopastore I had already watched the sad video you linked here. Nothing to add about the awful reality of today’s China. What is scary is the possibility of this technology spreading to other countries. I think that such a system should be forbidden by law at least in European countries, if not beyond: protection from such an extensive and detailed government surveillance system should be a basic human right, stated in the Universal Declaration of Human Rights. As far as I know, there is nothing detailed on this matter inside it as of today.

    If it were added, China couldn’t care less, as its leaders have already violated the Declaration in the past, but it would have a positive influence on us as Western people, giving the right to privacy the status of a fundamental value in our eyes. It would help prevent the implementation of such a system in our own countries, for any government utilizing it would have a short life in public opinion. The future of Chinese citizens is grim, especially for the minorities that are already oppressed.

    As for the fake news problem, I’d like to add my two cents to @jessinthebox96’s post. Managing and eradicating fake news is a priority for the modern digital world, as we have all witnessed the great damage it can create, particularly in politics. But the most important thing is the birth of a culture of real, truthful news and of a public sensitivity to the topic. The instruments that allow users to check the “fake rating” of a news item, in other people’s posts or in their own before posting, should be optional and non-invasive. Checking whether a tweet is truthful should be an action that comes from the user’s own mind; a permanent fake news warning would feel like an imposition. The same applies to post writing.

    In private messaging, the fake news check should happen only if I open the external link of the news item, not before. Nothing and nobody should know that I have that link in my chat, or who sent it to me, whom I’m sending it to, or any other information. Private chats should be completely private; that’s (part of) the reason they are encrypted. (Sadly though, WhatsApp belongs to Facebook and by default creates backups on Google Drive, which kind of defeats the privacy purpose.)

    Waiting to hear you back,




    Francesca Tomasello

    Hello everyone!

    Glad to read lots of interesting comments from the new joiners! The discussion is now really broad. Before entering a new discussion, since @gianlucabelloni advised me to watch some videos, I want to express my surprise: I can hardly believe that in such a complicated game as GTA (not as easy as Breakout) machine learning can become an expert at driving a motorcycle or a car (and I agree with you when you say that the driver seems a little drunk 🤣). The possibility of learning and absorbing new instructions is endless!

    Regarding @marcopastore‘s video, I think that the use of facial recognition is a bad solution in general, not only in China. The motto “If you have nothing to hide you have nothing to fear” is often used, and it’s an excuse to legitimize the approval of this approach. Security is an argument that can’t be pushed aside, but used this way, facial recognition risks the loss of our privacy and of our freedom to act as we wish. In particular in China, where communism and totalitarianism are best friends, the danger is even more frightening. In a country where people who express dissenting opinions about politics and the government are persecuted, where Facebook (like other common Western websites) is banned, where ethnic minorities are forced to work in labour camps and so on, this branch of artificial intelligence is surely leading to more and more restrictions and problems.

    I agree with @davidetoniolo that European countries should not adopt this method, but unfortunately London is on its way to implementing facial recognition; as I’ve read on the net, though, it seems that their technology is not working very well at the moment. This topic is really a “work in progress”, because governments and third parties such as supranational organisations need to legislate on this matter, and if they do not do so as soon as possible, there will be a loss of privacy and surely lots of data (our faces, literally) will be stolen or used improperly (I don’t have to remind you of Facebook and the selling of users’ data…).

    Connected to this, I want to mention Amazon’s first experiment of this kind, Amazon Go, a fusion of AI and retail that introduced a combination of various technologies to let you shop without standing in line to pay; in fact the name is “Just Walk Out Technology”. In this case, if used properly, facial recognition can be an aid. But where is the line between using this innovation properly or not? Would you feel secure or not when withdrawing your cash at the bank or paying by simply walking out of a supermarket? What do you think about it?

    See you soon,

    Francesca Tomasello 


    Good morning to everyone and welcome to the new members!

    Thank you @marcopastore for introducing a new topic; facial recognition is another interesting but crucial one.

    Watching the video, I was quite scared by how fast this system is growing. According to YITU, the facial recognition system would make “the world” healthier… but how? I totally agree with @francescatomasello: imagine if they could actually understand our thoughts and emotions through technology, collect our data and sell it to marketing industries… It would be catastrophic. As for the Facebook case, this social network has already implemented a soft type of facial recognition on the photos we post.

    Personally, I am not ready to use this new system and give away my last piece of privacy. I agree with @davidetoniolo and Francesca that it is very important that the EU makes an appropriate law to guarantee our freedom and privacy. This new system could help in certain ways, but I don’t think we are yet conscious enough to understand when it is right to use it, and as I said before there are many economic and political interests behind it. Maybe it would be better if governments provided training about correct use and risks whenever the system is about to be launched in a country.

    But how much investment is there in facial recognition? And above all, what is a facial recognition start-up worth? I say millions and millions: a truly new, rich market is being created and is going to grow day by day. According to a Wired article, the YITU startup is worth $300 million and has more than 300,000 employees. Can we still call it a startup? And there are many more startups that are worth more and receive more investment than the Chinese one.

    Going on with the topic, I found an interesting video about the replacement of human newsreaders in China. It uses the faces of two models and the voice of a robot to read the news in Chinese and in English. How crazy is that? Yes, it is only the beginning and a lot of improvements still need to be made. Still, the Xinhua news agency has launched another threat to the job market; some say we should not be scared about the disruption of old professions because new ones are about to come, but at the same time I don’t think that in the future everyone will love only digital, mathematical and statistical jobs.

    Regarding the Amazon Go video… it is really stunning!! It would be very interesting to know how machine learning can “understand” when and what you are taking from the shelf and when you are putting it back. For sure we would save a lot of time, and it would be cool to try it at least once, but at the same time I don’t think I’d feel comfortable using it. Seeing how Amazon is diversifying its business and how essential the use of a mobile phone is, I am waiting to see the launch of an Amazon mobile phone with its own browser, ahah (maybe that is more realistic than it might appear…).
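    Nobody outside Amazon has published exactly how “Just Walk Out” decides what you picked up, but one commonly cited ingredient, alongside the cameras, is shelf weight sensors. A toy sketch of just the weight-sensor half, assuming each product has a known unit weight (the product catalogue, weights and tolerance are all made up for illustration):

    ```python
    # Toy sketch: infer "take" / "put back" events from shelf weight-sensor deltas.
    # The catalogue, weights and tolerance are made-up illustrations, not Amazon's
    # actual system, which also fuses camera data to resolve ambiguous events.

    CATALOGUE = {"soda_can": 355.0, "chocolate_bar": 45.0}  # grams per unit
    TOLERANCE = 5.0  # grams of sensor noise we accept

    def classify_event(weight_before, weight_after):
        delta = weight_after - weight_before
        action = "put_back" if delta > 0 else "take"
        for product, unit_weight in CATALOGUE.items():
            if abs(abs(delta) - unit_weight) <= TOLERANCE:
                return (action, product)
        return ("unknown", None)  # ambiguous: this is where vision would step in

    print(classify_event(1000.0, 645.0))   # → ('take', 'soda_can')
    print(classify_event(1000.0, 1046.0))  # → ('put_back', 'chocolate_bar')
    ```

    The interesting engineering problem is the “unknown” branch: two light items removed together can look like one heavy item, which is presumably why a camera-based model is needed on top of the sensors.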

    Waiting to know what you think about facial recognition,



    Hi everyone,
    I thank Marco Pastore for introducing a new topic for discussion. It was interesting to read the points of view of you, my colleagues and, I hope, future friends: @francescatomasello, @davidetoniolo and Francesca.

    I watched the video on the terrible reality of today’s China; to be honest, the soundtrack itself sounds very disturbing, and I have a degree in communication… Facial recognition is an aspect of the new technology that poses several purely legal questions (and here my legal background comes in handy):

    – privacy;

    – the limits to privacy in favor of other juridically higher-level interests;

    – the legal discipline that should govern the technology;

    – supranational bodies which should guarantee a rule of law not only in Europe but also worldwide.

    Let me make a brief introduction: Alessandra, a dear friend of mine who lives in China and with whom I exchanged ideas, told me that the reality described in the video is everyday reality.

    Orwell’s 1984 easily comes to mind, both for the futuristic perspective and for the negative aspects that technology brings with it; in fact the lack of privacy is easily recognizable in the Big Brother who controls everyone and everything.

    At this point it is necessary, in my opinion, to take a step back: through social networks like Facebook, Twitter and Instagram, people give away for free part of the information concerning their personal lives: daily activities, hobbies, thoughts, political ideas. People share moments of family life through videos and photos.
    In my opinion, technology without “education” could become tyranny if not governed; it is not possible to stop progress, but we can at least bind it to rules. The law could have a decisive role in this regard.

    Children in schools could be taught elementary technology through a ministerial program established ad hoc and valid for everyone.

    As Google CEO Sundar Pichai claims, we should establish the rules ex ante; another word to be used sparingly is “limit”, perhaps constitutionally oriented limits in a democratic key.

    China unfortunately does not seem to have this perspective; there are in fact areas of the country that are highly advanced and others still developing.

    A State that allows massive exploitation of the poor classes of the country while granting few rights is not to be considered “democratic” in the strict sense.
    As a lawyer I want to highlight that even in Italy privacy itself has had to give way to “transparency”: the legislator has often had to balance competing interests, and in recent years transparency has played a greater role than privacy itself. In this regard, I recall the judgments Cons. State, Sec. VI, 20 April 2006, no. 2223, and Cons. State, A.P., 18 April 2006, no. 6.

    Another important balance, between privacy and the right to information, is one the Court of Cassation has also had to rule on, with judgment no. 10510/2016 of 20/05/2016.

    Law evolves as society, habits and public opinion evolve.

    In short, talking about privacy is not really that simple, but it is certainly a path that must be pursued, above all with reference to technology, AI and data stored online.

    I will make a point of writing about it again, because the topic is very interesting and the legal aspects can be useful to everyone.

    Thanks for your attention, greetings to everyone.

    Valentina Suffia


    Good afternoon everyone!

    @francescatomasello I’m glad you enjoyed those videos!

    Thank you @marcopastore for the new interesting topic.
    I can see that you are all highly sceptical about facial recognition technology, and I can’t disagree with you. Indeed, I can’t think of a positive application of this technology except maybe in places like airports and stadiums, where there are a lot of people and a higher chance of terrorist attacks.

    I’m not an expert on China’s socio-political condition, but I know for sure that it is a really closed nation, with strong state control over citizens. Hearing @valentina‘s friend’s testimony confirming that the situation in the video is everyday reality is really sad. This kind of control can be very dangerous. I agree with you all when you say we need a supranational organization with the power to make sure that governments don’t use new technologies in the wrong way. But I also realize that this could be a utopian way of thinking; countries will never grant such a power of control to an external organization.

    Maybe at least in the EU we can feel a little safer in this respect, because we are moving in the right direction to protect people’s data, even if slowly. For example, the “new” GDPR is a good step in protecting European citizens, but it can only be applied in the EU. (The GDPR is the reason why you now see pop-ups asking you to accept cookies before you start navigating almost every website.) @valentina, I see that you are more experienced in this field: do you know whether the GDPR says anything about facial recognition data?

    Having said that, I am really convinced that we have a huge gap in regulating the use of new technologies and data protection. People with a lot of money can now literally control people’s way of thinking; the last US presidential elections are just one example. Have you seen this video of the Cambridge Analytica CEO? (It was before the Facebook scandal.) The pride with which he speaks about how they used data to change people’s opinions is disturbing to me. The surgical precision with which it is possible to target people is out of control, and I’m saying this as a marketer. Is giving better, personalized ads really in the client’s interest? For me, not to this point. I agree with you when you say that we need to take a step back and start educating people about these tools, and about why they are free.

    About the videos you posted:
    @serenavineis I find the level of realism in the artificial newsreader’s facial movements impressive. Maybe it can replace one person, but how many programmers worked on that project? Probably a lot, and also some people who were not so technical.
    @francescatomasello Amazon Go is surely a great idea. I’d also like to try it one day, but probably there’s still a lot of work to do before it can become a truly scalable project. For now it is a really good marketing move.

    Lastly, I have a question for you: do you think we should fear for the biometric data we are giving to companies every day? For example, nowadays we all unlock our smartphones with fingerprint or face unlock. Don’t you think those companies could use that data as leverage to obtain government collaboration, or worse, sell it? Technically no, but… who knows?

    Have a good day,

    Lorenzo Daidone

    Hi everybody!

    I’m Lorenzo Daidone; I attended the Silicon Valley Study Tour this summer. This experience changed my life and my career.
    I would like to share my crowdfunding story with you. When I was selected to participate in the experience I was really excited, but I wasn’t able to pay for it… but in the end I went to California!

    For many of you the crowdfunding way could be a good opportunity.

    How did I do? Find out more at: https://youtu.be/eY5UxCM_Ras


    Hello everyone, good afternoon!
    @Gianluca I think a new technology like YITU’s could have some positive aspects.
    In fact it could be used in stadiums, in airports and for any kind of transport.
    In this case the aim is security, so my question is: should the Government, the Minister of the Interior, the Minister of Transport and the Minister of Defense regulate such a power?
    Another situation is the use of facial recognition for payments: for example, you could walk into a café or supermarket and need neither cash nor a credit card.
    Also in this case the question is: who will manage it? The Minister of Economy?
    In short, broad scenarios open up before us.
    Some of you suggested we need a supranational government, also for the resolution of other serious problems of common interest such as the environment, demography, human rights, etc. Are we enlightened enough to look beyond national or European borders for the sake of everyone’s well-being? Are we ready to sacrifice our status quo?
    Every day we give away all sorts of data about ourselves online for free, and marketing specialists rub their hands over our information, such as favorite programs, companies or political opinions.
    I think that social networks play on our vanity and our wish to perform like circus animals in order to be number one. To be number one we need method, respect and a good education; we can learn everything else from books, and by listening to someone else’s experience at work with humility and dedication. AI could also lead to morally doubtful scenarios, without even considering the principle of equality enshrined in Article 3 of the Constitution or the principle of defense in Article 24.
    In my next post I will include a small piece of research on the GDPR, as requested by @gianluca, in order to describe the current European situation on data protection.
    The situation created in the United States around the upcoming presidential elections stretched the limits of democracy.

    I send you this link:
    I also thank @francescatomasello for opening my eyes to Amazon Go!

    A beautiful project that, as Marco Montemagno said, can open our minds to the new jobs of the future and to how we must adapt and keep up with the times.

    Greetings to everyone.



    Thanks @lorenzodaidone for the specifics on crowdfunding 🙂


    Good morning guys,

    Amazon Go seems to me far more important for Amazon than for the customers. Although saving a few minutes in the payment process would be nice, in exchange they would obtain the most valuable identification tool: your face. It is also true that if you have a loyalty card your purchases are already being tracked today, so in privacy terms the implementation of facial recognition wouldn’t be a major step back. Objectively there isn’t a great privacy loss compared to what we experience today, but I really don’t like the idea of being constantly watched while I’m in a supermarket.

    During the last week a plethora of associations, of the most diverse backgrounds and purposes, signed three open letters to the CEOs of Microsoft, Google and Amazon: it appears that recently some US agencies, like the FBI, have had contacts with private companies about facial recognition AI. Microsoft and Google refused the offer, acknowledging the risks that a government application of the technology would create. Amazon, on the other hand, seems determined to license its Rekognition software.

    The first two letters praise Microsoft’s and Google’s engagement in social issues and willingness to look beyond money; the third takes a hard stance against Amazon’s proceedings, particularly after even groups of its own employees asked the company not to sell the technology to governments.

    Here’s the article that I first read, and the letters for Google, Microsoft and Amazon.

    While I don’t question that such a technology would have immense advantages in the fight against terrorism and fleeing criminals, there are also unquestionable possibilities for overuse and over-surveillance.

    I’d like to read your take on the issue, waiting to hear you back,
    Davide Toniolo


    P.S. @paolomarenco explained it wonderfully at last December’s conference in Bicocca, but I didn’t write down the details, so I forgot them. Does anybody remember when the selections for the SVST will take place? Also, which tour would the happy winners take part in, the first or the second one in August?

    Sorry for my goldfish-like memory,
