Home Forums Silicon Valley Study Tour – August 2019 Bicocca Silicon Valley 2019

23 replies, 10 voices Last updated by  Valentina 1 hour, 44 minutes ago
  • Gianluca
    Participant
    @gianlucabelloni
    #3281

    Hi everyone! 😃

    I’m sorry to join the conversation this late, but I’m currently on Erasmus+ and I only discovered this project the day before yesterday through an iBicocca post.
    My name is Gianluca Belloni, I’m 21, and I’m in the third year of the “Marketing, comunicazione aziendale e mercati globali” degree course.
    I’m really passionate about technology and digital marketing, so I read a lot about those two subjects, and I also produce some content (mainly videos) about the latter. If you want to see some of it and give me feedback or advice, just tell me!

    Talking about the main topic of this conversation, AI: I have just read all your comments, and I think we all agree about it.
    Like any other technology, AI is just a tool, and it can’t be good or bad by itself. What will determine the impact of AI on the world is the use that we, as humankind, make of it.
    Like @davidetoniolo, I’m optimistic seeing the good steps that the EU is taking on this subject. I also think it is really important to talk about this topic, so that people can form their own opinions instead of only being frightened by the possible consequences of this innovation. And in doing that, the warnings of people like Stephen Hawking and Elon Musk are crucial.

    Probably the biggest fear about AI is that it could begin “thinking” on its own and start a revolution against us, just like in the film I, Robot (@danielafiorellino, I remember it because it’s one of my favorite movies!). The “Three Laws of Robotics” can be really useful as a starting point, but in the film the AI starts the revolution precisely because of them, seeing humanity as a threat to the Earth and to itself!

    Can artificial intelligence really start to think and have dreams? I don’t know, but I’m sure it will shape our future, and it is already doing that. From self-driving cars to virtual assistants, AI is already among us. The UK police are also trying to use AI to prevent crimes, which is a great example of one of the many good uses it could have.

    Thank you @marcopastore for sharing the video about the machine-learning AI playing Breakout. It’s crazy to see how it can actually learn from its mistakes and also work out the best way to beat the game in just four hours.

    To answer @francescatomasello’s question, it’s actually possible to use machine learning with games like GTA. Some time ago, I found a programmer who had created a self-driving car in GTA V, called Charles. He documented the whole process in this playlist on his YouTube channel, and all the development is summed up in this page. I found it incredible to see how the car got better at driving video after video (even if it still seems that the driver is a little drunk 🤣). Let me know if it surprises you too!
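    By the way, the core trial-and-error idea behind both the Breakout agent and Charles can be sketched in a few lines. The real systems use deep networks over raw pixels, so this is only a toy illustration: tabular Q-learning on a made-up 5-cell corridor, where the agent gradually learns from failed wandering that “go right” reaches the reward.

```python
import random

# Toy trial-and-error learning (NOT the actual DeepMind/GTA code):
# an agent on a 5-cell corridor learns to walk right to reach a reward.
N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)       # walls at both ends
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # learn from the outcome of this step, mistake or not
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training: the best action in each non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # "go right" (+1) in every state once learning has converged
```

    The same loop (act, observe reward, update the value estimate) is what scales up to Breakout or driving; the games just replace the tiny state table with a neural network.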

    I hope to hear from you soon,

    Gianluca

    • This reply was modified 4 days, 17 hours ago by  Gianluca. Reason: @
    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3294

    Hi everyone, I’m Jessica Amianto Barbato. I’m a 22-year-old student who graduated in Psychosocial Sciences of Communication and is currently enrolled in the first year of Theory and Technology of Communication. I am among the founding members of Radio Bicocca and I’m currently taking care of its website and digital content, writing posts and shooting videos at events.

    If I’m not studying or working, you’ll likely find me playing an instrument, writing, at the cinema or at concerts.

     

    I’m quite a creative mind and I’m always out there searching for something new to do; I fear I might be somehow allergic to spare time: I tend to fill every single free minute of my life with interesting activities to try, and I am curious about almost anything!

     

    Sorry for having joined the forum this late!

     

    ABOUT THE TOPIC

    First things first: I have been doing some research on AI as a tool to collect data about a website’s audience that shares content through the dark social. In a few words, dark social (read more: https://www.ibm.com/blogs/think/be-en/2018/05/08/marketing-dark-dark-social/ ) happens when users copy and paste an untraceable URL into a private chat (e.g. WhatsApp or Facebook Messenger) or on social networks in order to share it with their contacts. Those who manage the website need to find an alternative to the more traditional Google Analytics to get to know their audience and understand their online behaviour. In this article published in The Washington Post ( https://www.washingtonpost.com/news/theworldpost/wp/2018/09/28/artificial-intelligence-3/?noredirect=on&utm_term=.55e02a68b9e0 ) the author deals with China being more advanced in AI research than the United States and European countries (@davidetoniolo mentioned data analysis as a fruitful field for AI to operate in), and with artificial intelligence capable of creating an artwork that elicits emotions, which sounds amazing to me!

    What I found to be peculiar is the social-network-oriented usage of AI, which deals with the impact of social media on democracy and trustworthy political news on Facebook. According to this, Italy has been involved in the study of a new approach to prevent fake news from spreading online thanks to some sort of reputation ranking assigned to journalists (anyone thought of Black Mirror in here?). Also, going back to the dark social topic, the article puts forward the idea that AI would be a useful way to track misinformation that spreads through the encrypted WhatsApp environment. An anthropologist claims that many mob killings in India (here is some news coverage if you don’t know what I’m talking about: https://www.nytimes.com/interactive/2018/07/18/technology/whatsapp-india-killings.html ) were fueled by the diffusion of fake news through WhatsApp and that, therefore, “a crowdsourced system managed by human moderators” could monitor problematic content that users are forwarding to each other.

     

    That being said, I know that this might sound like an “invasion of privacy” to many of you, and that’s where the worries expressed by Sundar Pichai start to make sense: would democracy be at risk in such a controlled scenario? Diving deeper into the matter, would it be too big a loss if what you got in return was safety? Let me explain: I think I would consider the option of giving away a little part of my privacy if fake news generated crime, but what kind of content would be censored? Would they try to shape my political views?

     

    I found an article by IBM ( https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html ) that considers both the negative and the positive sides of AI, claiming that it is essential to build trust in these systems and to educate people about how they work and how they can benefit from them. I think, in fact, that it is also important to take into account how the public will accept such a huge innovation; what I fear the most is that, once an ethical usage of AI is achieved, let’s say, government-side (or even company-side, if we consider the huge improvement in marketing that AI could lead to), people won’t be too fond of relying on it.

    We all know that governments are aware of the ethical problems artificial intelligence brings along, but are they considering the possible opposition to the new technology?

     

    Coming to the project you asked us to comment on, I think it’s such a forward-looking idea! On my side, I would totally love to be able to actively participate in policymaking, but I can also conceive of a future where people are skeptical about the practical benefits of AI. In my opinion, the only way to build awareness of how AI works is education, and shifting the conversation from what governments look forward to doing with it to what people actually think and know about artificial intelligence could ease its adoption in the future. Clearly I’m talking about educating the new generations, who will potentially witness a widespread presence of AI in their lives; what could the government do to make people conscious of what happens behind the scenes of artificial intelligence? How can it increase trust in new technologies?

    Strolling through your comments, I read @danielafiorellino asking whether AI will make us lazy (maybe lazier than we already are) or even replace us. Those questions made me realize that I cannot really foresee a future that resembles a sci-fi scenario at its core; I struggle to imagine a world where machines take control over humans (my professor always says that machines are stupid and it’s the human component that makes them “intelligent”; I don’t know if AI makes the case, but I kind of back him in saying that we would never give up on being the most intelligent beings on Earth). Sure, AI will make things easier, but we’ll find a way to value even the smallest of our efforts.

    Valentina
    Participant
    @valentina
    #3297

    Hi guys, my name is Valentina, I am 36 years old. I am a student-worker and also a mother. I am a Communication Science graduate and decided to keep on studying Law at university, in order to unlock more job opportunities. I am dynamic and alert; I like reading, traveling, visiting museums and listening to good music. From the SVST project I expect to get to know new people and new socio-cultural contexts, as well as to dive deeper into issues concerning the digital world and therefore the future.

    • This reply was modified 2 days, 8 hours ago by  Valentina.
    Valentina
    Participant
    @valentina
    #3299

    Hi everyone!
    I am a new entry in the group; I would like to be very active here, and I thank Marco Pastore for his work as a moderator.

    About an hour ago I posted my short description. I hope not to be discriminated against because of my age: I have never had this kind of experience abroad and I would be very happy to have this opportunity. For almost a year I have carefully considered all the proposals of iBicocca, and the “Silicon Valley 2019” project would be a great achievement for me.

    Regarding the topic of discussion: Google CEO Sundar Pichai, in an interview with The Washington Post, pointed out about AI: “I believe that the world of technology must realize that we cannot just build and then, if anything, fix and correct what is wrong; that does not work.” In fact, even though we are at the dawn of the use of AI, there are already clear examples of non-positive cases concerning what initially seemed to support normal human activity. From self-driving cars that might cause fatal accidents to rigged elections, these are not just threats but real risks. In 2018, the emblematic example of self-driving cars causing accidents raised serious concerns about whether there is still an actual need for a human being to supervise AI, and not only in terms of building ethical protocols. Equally, in view of the upcoming US presidential elections in 2020, a serious issue might arise from the use of new technologies that could compromise information (the Russiagate case has already been serious enough in this respect: an ongoing investigation, still conducted by special prosecutor Mueller, in which it is possible to find elements of incriminating cases of espionage in favor of foreign powers, carried out by means of information technology).

    Other very sensitive areas to keep an eye on in 2019 are facial recognition, already used by various police forces, which can lead to great violations of privacy, and so-called “deep fakes”, fake videos created with AI so perfect as to seem real. Another disturbing aspect is the racial and gender discrimination inherent in algorithms, which has emerged from several studies and which can create problems, for example, if AI is used for job selection.

    A future in which AIs gain the upper hand over humans is still far away, but their increasing popularity in recent years makes them dangerous, especially because of the use that can be made of them.
    Have a nice day.
    Hope to hear from you soon,
    Valentina Suffia

    Marco Pastore
    Keymaster
    @marcopastore
    #3302

    Hello to all the newcomers!

    In particular, hello to @valentina: we are glad to have a mother here among us, and I can tell you for sure that the evaluation will be based only on the thoughts expressed in the comments, not on age!

    I want to introduce a second topic on the forum.
    For candidates who join the discussion after my post: you can express your opinion starting from this new topic without considering the previous one (although it is really just one small frame of the more general topic discussed first).

    Look at this incredible video about the use of facial recognition in China: https://www.youtube.com/watch?v=lH2gMNrUuEY

    Then tell us what you think about it.
    How can we decide where to put the limit between order and safety on one side and surveillance and privacy on the other?

    Let’s discuss!

     

    Marco Pastore

    • This reply was modified 1 day, 17 hours ago by  Marco Pastore.
    Davide
    Participant
    @davidetoniolo
    #3304

    Hello to all the newcomers, finally we are growing!

    @marcopastore I had already watched the sad video you link here. Nothing to add to the awful reality of today’s China. What is scary is the possibility of this technology spreading to other countries. I think that such a system should be forbidden by law, at least in the European countries, if not above: freedom from such an extensive and detailed surveillance system run by a government should be a basic human right, stated in the Human Rights Declaration. As far as I know, there is nothing detailed on this subject inside it as of today.

    If it were added, China couldn’t care less, as its leaders have already violated the Declaration in the past, but it would have a positive influence on us as Western people, giving the right to privacy the status of a fundamental value in our eyes. It would be a safeguard against having such a system implemented at home, for any government utilizing it would have a short life in public opinion. The future of Chinese citizens is grim, especially for the minorities that are already oppressed.

    As for the fake news problem, I’d like to add my two cents to @jessinthebox96’s post. Managing and eradicating fake news is a priority for the modern digital world, as we have all witnessed the great damage it can create, particularly in politics. But the most important thing is the birth of a culture of real, truthful news and of a public sensitivity to the topic. The instruments that allow users to check the “fake rating” of a news item, of other people’s posts, or of their own before posting should be optional and non-invasive. Checking whether a tweet is truthful should be an action that comes from the user’s own mind; a permanent fake-news warning would feel like an imposition. The same applies to post writing.

    In private messaging, the fake news check should happen only if I open the external link of the news item, not before. Nothing and nobody should know that I have that link in my chat, or who sent it to me, whom I’m sending it to, or any other information. Private chats should be completely private; that’s (part of) the reason they are encrypted. (Sadly though, WhatsApp belongs to Facebook and, as a default setting, creates backups on Google Drive, which kind of defeats the privacy purpose...)

    Waiting to hear you back,
    Davide

     

     

     

    • This reply was modified 1 day, 3 hours ago by  Davide.
    Francesca Tomasello
    Participant
    @francescatomasello
    #3310

    Hello everyone!

    Glad to read lots of interesting comments from the new joiners! The topic has really opened up now. Before entering into a new discussion, since @gianlucabelloni advised me to watch some videos, I want to express my surprise: I can hardly believe that in such a complicated game as GTA (not as easy as Breakout) machine learning can become an expert at driving a motorcycle or a car (and I agree with you when you say that the driver seems a little drunk 🤣). The possibility of learning and absorbing new instructions is endless!

    Regarding @marcopastore’s video, I think that the use of facial recognition is a bad solution in general, not only in China. The motto “If you have nothing to hide, you have nothing to fear” is used as an excuse to legitimize the approval of this approach. Security is an argument that can’t be pushed aside, but in this way the risk of using facial recognition is the loss of our privacy and of our freedom to do and act as we wish. In particular in China, where communism and totalitarianism are best friends, the danger is even more frightening. In a country where people who express different opinions about politics and the government are persecuted, where Facebook (like other common Western websites) is banned, where ethnic minorities are forced to work in labour camps, and so on, this application of artificial intelligence is surely leading to more and more restrictions and problems.

    I agree with @davidetoniolo that European countries should not adopt this method, but unfortunately London is on the way to implementing facial recognition, although, as I’ve read on the net, it seems that the technology is not working very well at the moment. This topic is really a “work in progress”, because governments and third parties such as supranational organisations need to legislate on this theme, and if they do not do so as soon as possible, there will be a loss of privacy and surely lots of data (our faces, literally) will be stolen or used improperly (I do not have to remind you of Facebook and the selling of users’ data...).

    Connected to this, I want to mention Amazon’s first experiment, Amazon Go, which is the fusion of AI and retail: it introduced a combination of various technologies in order to shop without standing in line to pay; in fact, the name is “Just Walk Out Technology”. In this case, if used properly, facial recognition can be an aid. But where is the line between using this innovation properly and not? Would you feel secure when withdrawing your cash at the bank or paying by walking out of a supermarket? What do you think about it?

    See you soon,

    Francesca Tomasello 

    Serena
    Participant
    @serenavineis
    #3311

    Good morning to everyone and welcome to the new members!

    Thank you @marcopastore for introducing a new topic; facial recognition is another interesting but crucial one.

    Watching the video, I was quite scared by how fast this system is growing. According to YITU, the facial recognition system would make “the world” healthier... but how? I totally agree with @francescatomasello: think if they could actually understand our thoughts and emotions through technology, collect our data and sell it to marketing industries... It would be catastrophic. As for the Facebook case, this social network has already implemented a soft type of facial recognition on the photos we post.

    Personally, I am not ready to use this new system and give away my last piece of privacy. I agree with @davidetoniolo and Francesca that it is very important that the EU makes an appropriate law to guarantee our freedom and privacy. This new system could be a help in a certain way, but I don’t think we are yet conscious enough to understand when it is right to use it, and, as I said before, there are many economic and political interests behind it. Maybe it would be better if governments gave training about correct use and risks once the system is about to be launched in their countries.

    But how much investment is there in facial recognition? And above all, what is a facial recognition start-up worth? I say millions and millions: a real new rich market is being created and is going to grow day by day. According to a Wired article, the YITU startup is worth $300 million and has more than 300,000 employees. Can we still call it a startup? And there are many more startups that are worth more and receive more investment than the Chinese one.

    Going on with the topic, I found an interesting video about the replacement of human newsreaders in China. It consists of using the faces of two models and the voice of a robot to read the news in Chinese and in English. How crazy is that? Yes, it is only the beginning and a lot of improvements still need to be made. In any case, the Xinhua news agency has launched another threat into the job market; some say we should not be scared about the disruption of old professional jobs because new ones are about to come, but at the same time I don’t think all the people of the future will only be lovers of digital, mathematical and statistical jobs.

    Regarding the Amazon Go video... it is really stunning!! It would be very interesting to know how the machine learning can “understand” when and what you are taking from the shelf and when you are putting it back. For sure we would save a lot of time, and it would be cool to try it at least once, but at the same time I don’t think I’d feel comfortable using it. Seeing how Amazon is diversifying its business and how essential the use of a mobile phone is, I am waiting to see the launch of an Amazon mobile phone with its own browser ahah (maybe it is more realistic than it might appear...).

    Waiting to know what you think about facial recognition,

    Serena

    Valentina
    Participant
    @valentina
    #3314

    Hi everyone,
    I thank Marco Pastore for introducing a new topic for discussion. It was interesting to read your points of view, my colleagues and, I hope, future friends: @francescatomasello, @davidetoniolo and Francesca.

    I watched the video on the terrible reality of China today; to be honest, the soundtrack itself sounds very disturbing, and I have a degree in communication... Facial recognition is an aspect of the new technology that poses several questions of a purely legal nature (and in this case my legal background comes in handy):

    – privacy;

    – the limits to privacy in favor of other juridically higher-level interests;

    – the legal discipline that should govern the technology;

    – supranational bodies which should guarantee a rule of law not only in Europe but also worldwide.

    Let me make a brief introduction: Alessandra, a dear friend of mine who lives in China and with whom I exchanged ideas, told me that the reality described in the video is everyday reality.

    Orwell’s 1984 easily comes to mind for its futuristic perspective and for the negative aspects that technology brings as a result; in fact, the lack of privacy is easily recognizable in Big Brother, who controls everyone and everything.

    At this point, in my opinion, it is necessary to take a step back: through social networks like Facebook, Twitter and Instagram, people give away for free part of the information concerning their personal lives – daily activities, hobbies, thoughts, political ideas. People share moments of family life with videos and photos.
    In my opinion, technology without “education” could become tyranny if not governed; it is not possible to stop progress, but it is at least possible to bind it to rules. The law could have a decisive role in this regard.

    Children in schools could be taught the basics of technology through a ministerial program established ad hoc and valid for everyone.

    As Google CEO Sundar Pichai claims, we should establish the rules ex ante; another key word, to be used carefully, is “limit”: perhaps constitutionally oriented limits in a democratic key.

    China unfortunately does not seem to have this perspective; there are in fact areas of the country that are strongly advanced and others still developing.

    A state that allows massive exploitation of its poorer classes while granting them few rights is not to be considered “democratic” in the strict sense.
    As a lawyer, I want to highlight that even in Italy privacy itself has had to give way to “transparency”: the legislator has often had to balance competing interests, and in recent years transparency has played a greater role than privacy itself. In this regard, I recall the Council of State judgments Cons. State, Sec. VI, 20 April 2006, n. 2223, and Cons. State, A.P., 18 April 2006, n. 6.

    Another important balance, between privacy and the right to information, is one on which the Court of Cassation has had to rule, with sentence n. 10510/2016 of 20/05/2016.

    Law evolves as society, habits and public opinion evolve.

    In short, talking about privacy is not really that simple, but it is certainly a path that must be pursued, above all with reference to technology, AI and data stored online.

    I will make a point of writing about this again, because the topic is very interesting and the legal aspects can be useful to everyone.

    Thanks for your attention, greetings to everyone.

    Valentina Suffia

Viewing 9 posts - 16 through 24 (of 24 total)

You must be logged in to reply to this topic.