
Forum Replies Created

Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3722

    Hi everyone!

    Here’s the English version of my LinkedIn profile: Jessica Amianto Barbato

    Have a nice day!

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3689

    Hi everyone!

@valentina don’t worry, I didn’t take it as personal criticism! On the contrary, I appreciated and totally agree with what you said, but as someone who works, let’s say, in the “media” industry, I know how important it is for people to see both the right and the wrong, valid opinions and crazy statements. I consider the bad side of the Internet as some sort of vaccine (a common-sense vaccine, I mean): you have to understand where people are wrong in order to strengthen your own opinion (which can then be more valid and thought out).

I totally support what you say about being careful when it comes to scientific results. Still, I suggest you take into account the fact that I was not referring to DNA modifications; my last post was about DIY devices inserted under people’s skin, which are not necessarily related to DNA modifications. In those terms, my comparison between biohacks and tattoos was meant to be provocative: I genuinely fear a future where science is overcome by non-scientific beliefs. Also, as I wrote some posts ago, biohacking is being made available to the public (remember the company that sold DIY biohacking kits online?), turning it into something people might perceive as easily accessible, maybe even acceptable (which, I’d like to remark, I don’t think it is!). It was not about the outcomes; it was about the perception that getting biohacked could be so easy.

Of course, I have myself pointed out the risks of eugenics (though it wasn’t the core topic of my last post) and the risks of creating a divide between stronger and weaker people, and I was taking for granted that not all you see online (and not all the Internet community considers positive) comes without risks. Smart bandages are the result of research conducted by biotechnologists at the Almquist Lab; I don’t know if you have had a look at their website, but they work at Imperial College London, so I assume their publications are quite reliable. Of course they are aware of the possible adverse effects of their innovative device, and even though they haven’t written about them, I guess we should at least give a bit of credit to their work.

I really like hearing both sides of the story and using the opinions I consider silly as tools to sharpen my own counterarguments, that’s it!

    Have a nice day!

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3682

    Hi everyone!

In response to @valentina about how Lepht Anonym could be a fun case to follow on Twitter: I won’t say that everything she does is valuable or trustworthy, or even worth your attention at all; my point was that she keeps her followers updated about her biohacked self, and since we can read elsewhere about successful biohacking inventions, I just thought she might prove me wrong and convince me that her creations actually work. That being said, I don’t approve of her DIY philosophy and I consider it quite dangerous, but the geeky side of me can’t help but be interested in what she does. I follow her profile the same way I follow, let’s say, Donald Trump: I don’t want to be one of those people who express opinions on topics they know nothing about. Sometimes the POTUS makes me angry, sometimes his silly tweets make me laugh, most of the time I think he’s dumb, but all of this strengthens my arguments whenever the Trump topic comes up. I am not in favour of DIY biohacking, but I want to take Lepht Anonym’s experience into account when forming an opinion on what I am talking about.

I want to say that I totally share the curious interest in biohacking that @davidetoniolo talked about, even though I am not so sure I would never ever have a small device implanted in my body. At the moment I am both scared of and attracted to biohacking, but I don’t have sufficient proof that those alterations would be a benefit rather than a curse for me. I also fear that one day, sooner or later, getting biohacked will be considered as easy as getting tattooed: think about what tattoos meant fifty years ago, how they represented criminal communities and how they were regarded as mutilations of a sacred body in Western culture. Now getting inked is a trend, and the original meaning of tattoos is completely lost.

Also, @davidetoniolo has all my support in claiming that biohackers should be doctors, researchers or biologists (and, of course, biotechnologists). Among those who have the knowledge (and authorization) to work on, let’s say, biohacks, there’s Anna Stejskalová. The biotechnologist, together with the Almquist Lab in London, is working on DNA-powered smart bandages: these innovative bandages are packed with microscopic DNA envelopes that help the cells in the skin perform better through each step of the healing process. To explain how these devices work, they say:

    If you’re assembling Ikea furniture, you don’t want all the instructions at once. You’re liable to refuse instruction completely if all the steps must be given together. It’s better just to freestyle the construction than be comprehensively confused. Cells work much the same way. They need specific instructions at each step of the process. If one comes too soon, it’s useless.

In a few words, their bandages can “communicate” with skin tissue and regulate the healing process so that the correct instruction is provided at the right moment. The cool part of the project is that they plan to transpose the technology to bone healing, so I assume they would need to implant some sort of device inside the patient’s body for it to work properly. I don’t know if this fits the definition of “biohacking”, but I think it would be an amazing way of introducing this new field of study to the general public: everyone buys bandages, so why not buy smart ones?
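To make the Ikea analogy a bit more concrete, here is how I picture the staged-release logic as a tiny toy sketch in Python; the stage names are completely invented by me and have nothing to do with the actual biochemistry:

    # Toy state machine for the staged-instruction idea: each "envelope"
    # opens only once the previous healing stage is complete, so no
    # instruction arrives too soon (invented stage names, not real biology).
    STAGES = ["stop_bleeding", "fight_infection", "rebuild_tissue", "remodel"]

    def next_instruction(completed):
        """Return the instruction due now, or None once healing is finished."""
        for stage in STAGES:
            if stage not in completed:
                return stage  # release only the step that is due right now
        return None

    print(next_instruction([]))                 # -> "stop_bleeding"
    print(next_instruction(["stop_bleeding"]))  # -> "fight_infection"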

    Have a nice day!
    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3559

So here I am again: one last point on the genetic engineering issue, then on to the biohacking matter!

I also wanted to speak my mind about Anna Cereseto claiming that He Jiankui’s experiment cannot be defined as “prevention”, since we can never know whether someone will be exposed to the virus during their life. It is absolutely true that the scientist mostly wanted to play God with those kids’ genomes to prove that human beings can defeat the tricks nature plays on them, and it is also undeniable that his move might compromise the results and work of many other researchers all around the world. Nonetheless, I found Cereseto’s claim a bit out of focus: even today the most effective preventive health behaviours are practiced before the subject gets ill; it wouldn’t be called “prevention” if it only became effective once the disease had been contracted. Avert (which provides data and aims at educating people about HIV and AIDS) reports that 25% of HIV-positive people aren’t even aware of their status and could therefore spread the virus. Also, the vast majority of people living with HIV are located in the poorest countries in the world (an estimated 66% live in sub-Saharan Africa). In my opinion, even though treating the virus after it’s been contracted is an amazing solution, all of these people might involuntarily contribute to spreading the disease, whilst a solution like He Jiankui’s could prevent men, women and even children from getting the virus at all. I don’t mean to say that I agree with what he did, because he did not take into account the possible negative outcomes of genetic engineering applied to embryos, but I think that, in some ways, he foresaw the chance to disrupt HIV and keep future epidemics (think of what happened in the early ’80s, when the disease wasn’t known yet) from happening. I guess something like that must be in the works for genetic dysfunctions, cancer and hereditary diseases as well, but we’ll have to wait and see what the scientific community unleashes in the next few years.

    About Biohacking
The article about humans turning themselves into cyborgs reminded me of transhumanism, a DIY approach to self-improvement achieved by putting tiny magnets and small devices under one’s skin. This woman, Lepht Anonym, biohacked herself, experimenting first with RFID sensors under her skin so that she could do things like lock a computer specifically to her signature, and went on practicing on her body to find the perfect material for those sensors (she even used hot glue). She now runs a blog and is very active on Twitter, even though she wants to keep her identity hidden; I guess she must be a fun case to follow to see whether such homemade technologies are truly effective!

I even read about people merging biohacking and CRISPR technologies with terrible outcomes, mostly because the guy who injected himself with a DIY treatment for herpes had no medical knowledge, and he even advertised a treatment for lung cancer patients that likely involved CRISPR. Aaron Traywick has since died, and I guess he must have been just as crazy as He Jiankui, but I struggle to imagine how terrible it could have been if he had managed to put his lung cancer treatment to the test!

The article mentioned above is very interesting regarding the potential benefits of using biohacking to make healthcare available to everyone (they are working on “homebrew medications”, basically open-source insulin available for free to anyone who needs it), which would be amazing, above all in those countries where healthcare is prohibitively expensive. It also hints at the possibility of the FBI working side by side with biohackers on the development of a biosecurity system, which sounds rather scary to me!

Thanks to the Harvard University website I found out that there’s a company, The Odin, that is already selling DIY CRISPR kits online, and they even organize biohacking classes. Some things are rather harmless (like the glowing-bacteria kit), but other stuff makes me really wonder what might happen in the near future. What frightens me the most is that the ethical matter is always brought up as something common people might be interested in discussing when it comes to biohacking, while biohackers themselves act careless of the ethical repercussions of their discoveries. They’re like “do as I say, not as I do”. It follows that, again, if you show people that they can easily achieve “perfection” and you keep doing so despite the risks you’re taking, maybe most people will still be scared of the new technologies, but the community of biohackers (and, let me say, inexpert biohackers) will grow exponentially …
At that point, if the deregulated achievements exceeded the failures, people could be convinced that biohacking is 100% safe and scientifically approved. As much as I am attracted to this kind of innovation, I think education and proper communication should definitely avert the trend.

A funny fact about biohacking: @marcopastore posted about human cyborgs, but I found out that somewhere on a dairy farm in Wellsville, Utah, live three cyborg cows. A chip implanted in their bodies uses low-energy Bluetooth to transfer information about each cow’s chewing frequency, temperature and general wandering around the farm to a nearby station. The device is called EmbediVet, by LivestockLabs. The startup’s CEO, Tim Cannon, saw a Lepht Anonym video in which she talked about sensors implanted in her fingertips and decided to try and upgrade himself by inserting a finger magnet into his hand. He then worked for a company that developed an AI-fuelled device meant to predict illnesses, but the majority of people weren’t interested in having those things implanted in their bodies, above all if it wasn’t for strictly proven medical reasons. Since then he has switched from human biohacking to cattle biohacking, and his project seems to be fruitful. Maybe agriculture and livestock could see their future shaped by this new attitude!
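Just for fun, here is roughly how I imagine the chip’s telemetry packet, as a minimal Python sketch; the field names and byte packing are pure guesses of mine, not LivestockLabs’ actual protocol:

    import struct, time

    def encode_reading(cow_id, chews_per_min, temp_c):
        # 2-byte id, 4-byte timestamp, 1-byte chew rate, temperature in 0.1 degC
        # steps: a payload this small fits a low-energy Bluetooth broadcast.
        return struct.pack("<HIBH", cow_id, int(time.time()),
                           chews_per_min, int(temp_c * 10))

    payload = encode_reading(cow_id=3, chews_per_min=54, temp_c=38.6)
    print(len(payload), "bytes")  # -> 9 bytes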

    I’m done for today, thanks for bearing with me and have a nice day!
    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3555

    Hi everyone!

    Thanks for introducing this new topic!

First of all, I’d like to back @davidetoniolo’s concerns about He Jiankui’s experiments and the doubts he raises about the methods the scientist used. I have no solid knowledge of the matter, so I had to do some research in order to form a valid opinion on the issues related to genetic engineering; that’s when I stumbled upon this:

    https://www.youtube.com/watch?v=jAhjPd4uNFY

This video provides a brief look into genetic engineering, and it also shows that in the early ’90s, to treat infertility, babies were created that carried genetic information from three different human beings.

When I first read the article about He Jiankui, I found myself wondering how he could have turned his project into reality without being supported by an institution or a research centre; the scientist appears to be the only person accountable for the experiment, and the whole situation looks a bit shady. The video above explains that since CRISPR entered the gene-editing scene, the costs of engineering have shrunk to the point that virtually anybody with a laboratory (which of course He Jiankui had) can do it. In my opinion, the Chinese scientist is proof of what might happen if this kind of technology became so popular that almost everybody could access it.

The video also claims that in 2015 a group of scientists managed to cut the HIV virus out of living cells in patients who took part in the experiment, and it seems that CRISPR is being considered as a way of preventing and treating HIV and other immunodeficiency-related diseases. The difference between this kind of intervention and that of the Chinese scientist in @marcopastore’s article is that the former, let me say, “dies” with the person who carries it (so modified genes won’t be transmitted to the next generation), while the latter, which works on reproductive cells or very early embryos, “creates” humans who can pass the modified genome on to their children, who could eventually spread it over future generations.

I think you should take the time to watch the video, because it brings up some interesting points about what we should expect from genetic engineering in ethical terms; one thing it says, which struck me, is that “as soon as the first engineered kid is born, a door is opened that can’t be closed anymore”. I guess we have already reached that point in some ways; the idea is that once you have shown the world (not the scientific, academic world, but common people, who mostly get to know scientific innovations through biased media) that the most terrifying diseases can be not only cured but prevented, there’s no way back! You could never deny a kid with a genetic predisposition to hereditary diseases a cure that involves CRISPR if it’s been done before. Genetic engineering could be used to treat human flaws rather than medical emergencies, ending up being considered one of those commercial trends everybody seems to be attracted to. The video mentions achieving a faster metabolism, a better muscular structure and perfect eyesight, so that, in the end, a new conception of perfection is established as a standard to look up to. Now try to imagine a world in which the wealthiest people can afford to defeat aging by undergoing CRISPR treatments …
As Paolo Benanti said, “medicine will no longer be an art that cares for the sick but a sort of marketing relationship between doctor (seller) and patient (client)”, and we would create a world in which we reject non-perfect human beings. I don’t think this is too far from being true even now: as @francescatomasello wrote, many women whose children are diagnosed with genetic diseases decide to end their pregnancy; in Europe, for example, about 92% of all pregnancies where trisomy is detected are terminated, and even though I don’t feel like judging this decision, we cannot deny that it is basically a way of eliminating imperfections from our species.

I think the only way to prevent people from abusing this new technology for futile purposes is proper scientific communication. In popular culture science gets mistreated literally every day; think of fad diets or the “we only use 10% of our brain” catchphrase, and it takes a decent amount of effort to debunk all these false myths. As far as scientific communication is concerned, I found this video to be a very well-thought-out example of how hard it is for scientists to make people aware of what’s going on within the scientific community:

    https://www.youtube.com/watch?v=sweN8d4_MUg

The message should be customized to each stratum of society in order to prevent people from thinking they can turn themselves into superheroes or whatnot. Pay attention to how moderate the experts’ opinions are and how they always balance the positive achievements against their supposed negative side effects. Telling people the whole story might discourage their superficial interest in thoughtless self-improvement. Still, it’s a hot topic to discuss, because we always need to take into account all the economic and political interests that interact with pure science (in the first video the speaker mentions a dystopian future in which North Korean rulers use CRISPR to make the population perfect … eugenics again!), but I think it’s the only field we can actively work on before this innovation becomes too popular to be put aside!

    I am ending this post here, but there’s another one coming; sorry in advance for double posting, but I was writing waaaay too much!

    Hope to hear from you soon!
    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3522

@gianlucabelloni interesting readings, the ones you shared! I totally back you when you claim Elon Musk is overstating the state of the art when it comes to self-driving cars, but I appreciated his point on how people might manipulate his statements: “people sometimes will extrapolate that to mean now it works with 100 percent certainty, requires no observation, perfectly. This is not the case”. It’s clearly a smart move on his part to protect himself from allegations of having put somebody’s life at risk with his driverless cars, but it’s also a way to make it obvious that claiming that something “works” is not the same as saying it “works with 100% certainty”. It reminded me of a linguistics study on how our language influences risk perception; I couldn’t find a proper paper online (the experiment was Rundblad’s), but if I had to transpose its results to the driverless-car topic, I would say that when we read or hear someone say “the self-driving car is safe (or works properly)”, we tend to understand “the self-driving car is always, certainly safe”. I mean, I guess Elon Musk must be a clever one and, apart from being a great entrepreneur, a sharp-witted communicator.

As for the police controlling a driverless car to make it pull over, I think it would be a great way to prevent the vehicle from going out of control. I am aware of the legal repercussions of allowing the authorities to supervise the car, but I’m seeing it in a different way: what if the driver suffers some kind of illness while behind the wheel? Wouldn’t it be amazing if the car could detect and recognize the medical emergency and immediately seek help?
My mind is wandering (again) a little bit, but imagine if there was a way of calling the police anytime someone breaks into a car and tries to steal the owner’s stuff, or of reaching out to a tow truck whenever the vehicle has some kind of malfunction. I am not completely sure about this, but I think I would, let me say, “let the police in”; if the only kind of control the authorities could exert was stopping the car when the driver isn’t able to manage the situation anymore, I wouldn’t mind.

@serenavineis marine pollution is such a relevant issue, thanks for bringing it up! I didn’t know about Folaga, but I read about The Plastic Tide project, which consisted of the development of an AI-fuelled drone that could detect plastic waste in the oceans. It seems that the charity that financed the project has shut down, and I don’t know what to expect from their team right now. In my research I found out about the Microsoft AI for Earth project, and it’s amazing to see how much we can achieve through a conscious use of AI technologies; you can scroll through their open projects in the fields of smart agriculture, weather forecasting (it deals with the response to catastrophic hurricanes) and mapping for species conservation. Here you can read about all the topics explored by AI applied to environmental issues in 2018 … Let me say I had never even considered the matter, and the most interesting side of the story is that they expose the risks of applying these new systems to those issues (they mention ethical, control, social and economic risks). As the Earth Institute at Columbia University claims: “In India, AI has helped farmers get 30 percent higher groundnut yields per hectare by providing information on preparing the land, applying fertilizer and choosing sowing dates. In Norway, AI helped create a flexible and autonomous electric grid, integrating more renewable energy. And AI has helped researchers achieve 89 to 99 percent accuracy in identifying tropical cyclones, weather fronts and atmospheric rivers, the latter of which can cause heavy precipitation and are often hard for humans to identify on their own. By improving weather forecasts, these types of programs can help keep people safe”.

    I’ll be back later for another couple of things, but meanwhile I wish you a great weekend!
    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3474

    Hi everyone! How are you doing?

What @valentina and @serenavineis said about the Samsung Galaxy Fold made me think about something I read a couple of weeks ago: Huawei has basically taught its Mate 20 Pro to complete Schubert’s Symphony No. 8, known as the Unfinished Symphony. The funny thing is that they achieved their goal by using artificial intelligence to analyse the unfinished piece of music and to finish it in the style of the original composer. The new version took the pitch, the timbre and the meter of the existing music and completed it with an AI-generated possible conclusion. The final orchestral score, performed live on February 4 at Cadogan Hall in London, was arranged by Lucas Cantor, who took the AI-generated music and filled the gaps to make it playable by a full orchestra. Here you can listen to what I’m talking about.

You might already know that the Mate 20 Pro features Master AI, an AI system developed to bring photography to a new level. It recognizes what subject is in front of the camera and selects different scene modes and optimal parameters to make the resulting image look flawless. What @valentina says about the Samsung Galaxy Fold partnering with Adobe and Instagram is true: we are all aware of how widespread the craze for photo- and video-editing apps is (I am a victim of the trend too: my graduation thesis was all about video-editing applications and the kind of user experience each of them could guarantee), and I’m pretty sure camera specs are the most scrutinized feature when someone buys a new smartphone. Being a video editor myself, the prospect of having a built-in version of Adobe Premiere Rush already installed on my device was quite appealing (consider that the standard version is a bit limited and rather heavy for the device to run smoothly). That’s a smart move on Samsung’s side, but I’m still quite convinced that Huawei will take over the market well before the Fold and the whole S10 series catch on. Huawei’s name is undeniably associated with top-notch brands when it comes to cameras and lenses (Leica), and the new AI system, I think, is far more enticing than the Instagram filter options put forward by Samsung.

    What do you think about it?

    Have a nice weekend!

    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3468

    Hi everyone!

The “This Person Does Not Exist” website @peppuz posted reminded me of a French artist, Raphael Fabre, who got an ID from the government by using a 3D model generated with computer graphics software instead of a photo of his own face. You can read more about what happened here, but it made me realize that AI algorithms like the one that website uses (GANs, generative adversarial networks, which consist of two networks competing against each other billions of times to refine their image-generating skills until they’re efficient enough to produce the pictures called deepfakes) could actually end up being used for illegal purposes. Some people have put forward the idea that GANs could feed the next art movement, which sounds pretty crazy (how can you call a computer an artist?), but it’s not too far from reality: Google DeepDream, for example, uses AI to fuse patterns into images portraying faces, and many artists have already hosted exhibitions featuring AI-generated artworks (some are listed here). As cool as it might seem, not all that sounds innocent is actually harmless: right after the little sketch below, I’ll show you a video generated through GANs which exposes the scary side of AI algorithms.
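First, for those curious about how the “two competing networks” actually play against each other, here is a minimal toy example in Python/PyTorch; it learns a tiny 2-D point cloud rather than faces, and it is of course nothing like the website’s real model:

    import torch
    import torch.nn as nn

    # Generator maps noise to a fake 2-D sample; discriminator scores realness.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" data cloud
        fake = G(torch.randn(64, 8))

        # Discriminator's turn: label real samples 1, fakes 0.
        opt_d.zero_grad()
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        opt_d.step()

        # Generator's turn: try to fool the discriminator into calling fakes real.
        opt_g.zero_grad()
        g_loss = bce(D(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

The “billions of times” in the real systems is this same loop, scaled up to huge image networks.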

Here you can hear Obama saying things the real Obama would never say; this is just one example of AI serving the purpose of spreading fake news, and it was the alarm bell that led to the development of another algorithm that could detect the blinking rate of people in videos like Obama’s. In AI-generated content the blinking rate is usually lower than normal, so you can be rather certain that what you’re looking at is fake; but what happens when you don’t have movements to track? What if the subject in question is a still picture?
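Going back to the blinking idea for a second: the check is simple enough to sketch. Here is my own toy version in Python, assuming some landmark detector already gives six eye landmarks per frame (this is not the published detector, just the general idea):

    import numpy as np

    def eye_aspect_ratio(eye):
        """eye: 6x2 array of eye landmarks; the ratio drops when the eye closes."""
        v1 = np.linalg.norm(eye[1] - eye[5])
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])
        return (v1 + v2) / (2.0 * h)

    def blinks_per_minute(ear_series, fps=30, threshold=0.2):
        closed = np.array(ear_series) < threshold
        blinks = np.sum(closed[1:] & ~closed[:-1])  # count eye closures (rising edges)
        minutes = len(ear_series) / fps / 60.0
        return blinks / minutes

    # Humans blink roughly 15-20 times a minute; a clip whose rate falls far
    # below that would be flagged as possibly synthetic.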

I reckon thousands of criminals could fake their identities by generating pictures like those from “This Person Does Not Exist”, and if you consider that a random guy managed to have the French government validate his identity on the most official of documents (which now bears a “virtual” photo instead of his own), what would happen if the formula were made available to the public (which, by the way, I think it already is: I found this on GitHub, but I think it would require computers too powerful for it to be actually accessible)?

@davidetoniolo I would really back your political-debate app! Still, you would have to come to terms with the filter bubble issue; the filter bubble is the online mechanism of information polarization, meaning that social networks (and Google’s SERP) tend to show users what they like, giving them the feeling that their own idea is the correct, most validated one. It’s exactly what we call the echo chamber effect: people surrounding themselves with other people with whom they share the same opinions. Another consequence of this kind of process is the spiral of silence: when all your friends, your family and your colleagues voice opinions you don’t agree with, you tend to shush yourself and keep your minority view to yourself. In the real world each of us interacts with different people in different contexts, so it’s quite easy to hear more sides of an issue, but on social networks it’s very hard not to get convinced that what we think is right no matter what; in this case, if we spoke our mind, a plethora of users hiding behind their keyboards would be ready to forcefully reaffirm their ideas, so we are pushed to stay silent.
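To show how little it takes to build an echo chamber, here is a toy Python simulation of that feedback loop; it is entirely my own toy model, not any platform’s real ranking:

    import random

    random.seed(1)
    opinion = 0.0                                        # stance on a -1..1 axis
    items = [random.uniform(-1, 1) for _ in range(500)]  # the pool of posts

    for day in range(100):
        # "Engagement-optimised" feed: the ten posts closest to the current view.
        feed = sorted(items, key=lambda x: abs(x - opinion))[:10]
        # Exposure nudges the opinion toward what was shown.
        opinion += 0.1 * (sum(feed) / len(feed) - opinion)

    # The opinion locks onto a nearby cluster of agreeable posts and stops
    # moving: nothing in the loop ever pulls it back toward the centre.
    print(f"final opinion: {opinion:+.2f}")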

Thanks to @danielafiorellino for sharing those videos; I watched them both and I even laughed at the idea of implementing an eject button (even though I read the comments and someone seemed to be pretty convinced by it). One comment says that isolated accidents are likely to decrease but the risk of something terrible happening to everyone’s car at the same time would increase; I hadn’t considered that possibility before because, in my mind, I don’t know why, it always evokes the kind of The Day After Tomorrow post-apocalyptic scenario we used to see in films a few years ago. I can’t easily get on board with such a dramatic vision because I think that, as we approach the new technology, we will progressively face small issues and prevent catastrophes from happening. Maybe I’m a little too optimistic, but that’s the way it works for me …

I also wanted to show you an article about a company launching artificial-intelligence-based software to find the perfect flatmate and put an end to horror stories about the people we share flats with. It just made me giggle at first, but it seems like it could eventually be a great formula for off-site students; the company plans to apply the algorithm in the commercial sector (such as retirement villages), but I think it would work great even for matching dorm rooms or booking a shared room in a hotel. It’s not really about tourism, but I wanted to share it with you anyway!

Generally speaking, I want to link an article about Stitcher (a Spotify-like app that collects podcasts about dozens of different topics) teaming up with Vox Media to release podcasts about AI and privacy issues; they’ll likely be made available next summer.

    Have a great day!

    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3438

    Hi everyone,

@davidetoniolo I appreciate your response, but I’d like to point out that the MIT system I mentioned, RF-Pose, does not even raise the privacy matter, since it only detects movements through wireless signals: there is no recognition of the person being detected and therefore no privacy-related concern to be considered. Given that I don’t think driverless cars will be the whole future of private transportation, I find your three options a bit limiting; my point was to find an alternative to what had been brought up in this forum before. I am not a technician, which is why I called my suggestion “naive”, but I was trying to analyze the situation from a different point of view; you state that the car cannot be aware of a pedestrian before it comes closer to the intersection, but what if we found a way to detect any human and non-human presence while the car is approaching a traffic light or an intersection, or even while it’s simply driving through the streets (I’m referring to pedestrians crossing the street outside the crossing area)?

Something like RF-Pose, integrated in the car and working on its own (there’s a video in the article about RF-Pose that shows no need for additional devices apart from the RF camera), could, in my opinion, alert the car whenever someone, even a dog rushing out from behind a car parked by the sidewalk, moves in a way that suggests an intention to cross the street. Apart from that, nobody can ever guarantee a 0% chance of stumbling into a situation with unavoidable deaths even when it’s you behind the wheel, even though I’m aware that when it comes to technological innovations we all look for certainties rather than high percentages of success. The point is: when it comes to making predictions we are usually imprecise and quite biased by our perception of the world around us; we are built to react. An efficient machine, on the contrary, could make valid and precise predictions before getting to the point where a sudden reaction is needed.
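Here is a naive Python sketch of what I mean by “predict, don’t just react”; the numbers and the interface are invented by me and have nothing to do with how RF-Pose is actually implemented:

    def predict_crossing(positions, lane_x, horizon=1.5):
        """positions: [(t, x, y), ...] for one detected moving subject;
        lane_x: lateral coordinate (metres) of the car's lane edge."""
        (t0, x0, _), (t1, x1, _) = positions[-2], positions[-1]
        vx = (x1 - x0) / (t1 - t0)           # lateral speed toward the road
        if vx <= 0:
            return False                     # moving away from the lane
        time_to_lane = (lane_x - x1) / vx
        return 0 <= time_to_lane <= horizon  # reaches the lane within `horizon` seconds

    # A dog darting out from behind a parked car:
    print(predict_crossing([(0.0, 1.2, 5.0), (0.2, 1.8, 5.0)], lane_x=3.0))  # -> True

The car would start braking on the prediction, well before the dog is anywhere near the lane.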

Also, about what you say regarding the Moral Machine test, which by the way I really enjoyed taking: if the car cannot predict your presence (though, as I said before, the car I imagine hitting the streets would be able to make predictions rather than simply react to situations) and has to react as soon as it sees the pedestrian crossing the street, what would it do? I’d still consider this a matter of prediction: I imagine the car analyzing the scenario, calculating the current risk of getting involved in an accident, considering different options (maybe even simply stopping) and acting accordingly, all in the blink of an eye.

Clearly I’m dreaming a little, because I can’t even imagine how expensive such a powerful car would be, but my point is to picture a way for technological innovation to be part of our everyday life. I just think the tools to make that happen are out there, ready to be tested.

Of course, my personal opinion is that driverless cars wouldn’t drive drunk, make U-turns where they’re forbidden, text while driving, or break speed limits (I am considering the most common causes of road accidents here, without even counting distraction); as much as I trust myself (and I don’t trust myself, which is why I don’t drive that much) and other people behind the wheel, I don’t know whether I’d still stubbornly consider my driving safer than a highly predictive machine’s.

    NB: Not saying that I would support the massive diffusion of driverless cars, just sharing a different point of view =)

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3432

    Hi everyone, sorry for having been a little absent lately

I (finally) took the Moral Machine test and, as soon as I began judging, I found it was not what I expected it to be. To be honest, it raised loads of questions for me about whether choosing the most acceptable outcome is an actual moral matter; let me explain what I mean: when you have to choose between hitting a wall and killing the passengers in the car, or turning left/right and killing the pedestrians, you are taking for granted that someone is going to be killed at all. Now, I acknowledge that the test aims at demonstrating that the same scenario might lead to different outcomes depending on the person judging it, but don’t you find it a little too polarizing?

I was searching for some news and found an article about AI principles which mentions what Google stated about applications its own AI will not pursue:

    “Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints”

Sundar Pichai, in listing Google’s AI principles, clearly declares that any application that might cause harm won’t be pursued. While solving the Moral Machine dilemmas I was thinking that, if you follow the principle of “unharmful technology first of all”, you don’t even get to the point of deciding whom to kill and whom to save in case of an accident. The benefits of a self-driving car that wonders about who had better be killed would definitely not outweigh the risks of having it on the streets. Still, I found the test too polarizing because, if it’s true that AI should prioritize justice (not equity, nor equality, according to the article I linked above), neither of the groups of people involved in the test’s scenarios should be injured, because the root cause of the potential accident would have been made harmless: for example, both the zebra crossing and the wall could be signalled miles in advance (I’m not too aware of how this kind of technology works, but I’d naively compare that sort of cross-device communication to NFC or IFTTT, both of which I use to silence my phone whenever I enter my room after 11 pm and to turn off Wi-Fi when I leave my house). And what if someone crossed the street outside the zebra-crossing area? Then a solution like RF-Pose, which detects people’s movements through walls, could alert the car whenever a person or an animal is standing within a specific area around it and stop it effectively before anything bad happens. With such a technology it wouldn’t even be about recognizing a green light or distinguishing an old man from a young girl, because the car would already know that something it can’t directly detect is happening, and it would already be prepared to face it (many traffic lights, for example, are programmed to turn red whenever a car exceeds the speed limit; wouldn’t it be easy, then, to make the car and the traffic-light system communicate?).
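To make the car/traffic-light communication I’m imagining a bit more concrete, here is a tiny Python sketch; the message format is invented by me, not a real vehicle-to-infrastructure standard:

    from dataclasses import dataclass

    @dataclass
    class InfrastructureMessage:
        source: str        # e.g. "traffic_light_42"
        hazard: str        # "red_light", "pedestrian_on_crossing", ...
        distance_m: float  # how far ahead the hazard is

    def advisory_speed(msg, current_kmh):
        """Scale the target speed down as a broadcast hazard gets closer."""
        if msg.hazard in {"red_light", "pedestrian_on_crossing"} and msg.distance_m < 200:
            return min(current_kmh, current_kmh * msg.distance_m / 200)
        return current_kmh

    msg = InfrastructureMessage("traffic_light_42", "pedestrian_on_crossing", 80.0)
    print(advisory_speed(msg, 50.0))  # -> 20.0 km/h, braking before any camera sees anything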

This is to say that we should try to imagine the technicians working on self-driving cars turning the issues we consider “moral” into technological challenges to be faced before driverless transport hits the streets. That’s why I found the Moral Machine test too polarizing: it generally doesn’t consider a third way of facing the problem, while the car will try to do its best to avoid any harmful consequence.

That being said, both the test and the results shocked me a little, and I found it amazing to realize that, in scenarios like those, we are always prone to saving ourselves without considering the outcomes that might affect the other people involved, but I guess that’s just how the human machine works …

@gianlucabelloni thanks for sharing the news about Facebook’s interoperability, and thanks @peppuz for making it clear that nothing will really change for users. I have been doing some research on this for a university project about dark social analytics (I’ve brought it up before in the forum; it’s not self-promotion, I swear, but I’m linking it here so that you can see what I’m talking about, since the phenomenon is not being deeply investigated in Italy and we basically translated most of the articles about it that we could find online). Zuckerberg claimed that the idea of merging those platforms is related to pursuing a higher security level and extending Whatsapp’s encryption to Messenger and Instagram. The unification itself is not much of a revolution in terms of data sharing (I mean, the three applications already belong to Facebook), but I’d like to share a point with you: at the moment nobody can really see what you write in your conversations except the person who receives your messages, but there are ways for online publishers to know whether you are sharing their content via private instant-messaging platforms. Don’t be alarmed: if you copy and paste this website’s URL into Whatsapp and send it to a friend, nobody can really know it’s exactly you who shared the link, but the webmaster could potentially know that someone has copied the link, from which browser they entered the website and so on. Now, what I thought when reading the NYT article about Zuckerberg’s plans for Whatsapp, Instagram and Messenger was: what if they could match my public profiles on Instagram and Facebook with the content I share with my contacts?
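Just to make the tracking part above concrete, here is a minimal Python sketch of how a publisher can tell that a link was shared privately without knowing who shared it; it’s my own illustration, not any specific analytics product:

    import secrets
    from urllib.parse import urlencode

    def decorated_url(base):
        """Attach an anonymous per-pageview token to the article URL."""
        token = secrets.token_urlsafe(8)
        # The site stores token -> (timestamp, browser, referrer) on its side.
        return f"{base}?{urlencode({'share': token})}"

    print(decorated_url("https://example.com/article"))
    # If this exact URL later shows up in someone else's browser, the publisher
    # knows the page travelled through a "dark" channel (chat, email, ...)
    # without learning the identity of whoever copied it.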

I bet this would all turn into massive advertising campaigns, and there would be no other purpose in doing so, but what do you think about it?

Also, I would like to share with you a game that I play sometimes, which made me question many things at first but has turned into a funny way to trick the system: have you ever noticed that Instagram customizes the sponsored posts in your feed according to what you talk about in your Whatsapp chats?

I don’t know whether that’s supposed to happen (because we implicitly give our consent), but I was once talking about strollers with a friend of mine; the topic had never entered our Whatsapp conversations before, nor had I ever googled it (I have no kids, no friends with kids, not even a family member who is having a baby), so I found it a little weird when, after having sent her a bunch of messages about buying a stroller for a friend who was acting childishly, my Instagram feed and stories were filled with ads for strollers (mostly Cam and Chicco). We tried again with pasta sauce, ski passes, mattresses and even vacuum cleaners, and it happened every time.

I’m ending the post here so that you don’t have to read an entire book about my opinions (sorry guys). Hope to hear from you!

    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3407

    Hi everyone,

I guess this is what @davidetoniolo might be talking about; I find what Smith claims very fair, and it’s quite the point I was trying to make in my last post. The letters mentioned in the article deal with the way those companies are approaching the new technology: they call it the “break-then-fix” way, given that most of the facial recognition technologies in use can’t really vouch for their accuracy in their first applications. The people who protested against the selling of FR systems mostly worried about how the next generations might feel being tracked while going, for example, to their place of worship. Now I’d like to focus on the concept of “confidence threshold”: I hadn’t really thought about it before reading the article linked above, but I think we might be missing an important point. We’ve been discussing the usefulness and the harms of FR technologies looking for black-and-white certainties; when we claim that facial recognition technologies recognize, identify and scan people’s features, we assume those processes lead to 100% unequivocal results. But what if each match came with, let’s say, a 70% confidence score? Reading every result together with its confidence, instead of as a certainty, would make the false positives and mistakes we act upon wane significantly.
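A toy Python example of what the threshold changes, with invented numbers and no real FR system behind it:

    # (candidate, confidence score reported by the system)
    matches = [("suspect_A", 0.97), ("suspect_B", 0.81),
               ("suspect_C", 0.71), ("suspect_D", 0.55)]

    def accepted(matches, threshold):
        return [name for name, conf in matches if conf >= threshold]

    print(accepted(matches, 0.70))  # loose: three candidates, more false positives
    print(accepted(matches, 0.95))  # strict: one candidate, but true matches may be missed

The same match list looks very different depending on where you set the bar, which is exactly why treating every result as a certainty is misleading.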

I am not in any way stating that considering a lower confidence threshold would make FR safer, especially for public-security-related uses of the technology; but for sure we would be more aware of what facial recognition can do and of the extent to which we should consider it trustworthy. I think that a widespread application of FR still needs to be heavily regulated privacy-wise, but if we all stopped treating its results as biblical truth, we could re-evaluate its positive side and maybe integrate FR systems into the advanced stages of identification processes (meaning that facial recognition would come after many human-driven steps of identifying people). What do you think about this?

Going back to @marcopastore’s point on trains, I’d like to share my research about two FR-related innovations, one I support and one I completely disagree with: the former deals with FR technologies being used to replace train tickets (both the BBC and the Daily Mail wrote about it), while the latter is yet another system to prevent terrorism on public transport. Now, I reckon that Trenord’s databases (as well as those of any other public transport administrator) already contain most of our private data, but they’re completely missing out on occasional users without any kind of subscription. An FR system would be a great way to prevent people from getting on the train without a valid ticket; clearly the databases would be enriched with FR-related data and unsubscribed users would need to be registered as well, but I think we could give away a little more data (which is what we already do with social media, in my opinion) in order to get a better public service. The other innovation is called FaceFirst and is a member of the US National Safe Skies Alliance; I was kind of shocked when I realized you can even book a demo yourself if you want to check this system out. We worry about big companies installing their facial recognition software while smaller private companies are already on the market. What I found interesting, though, is their “Privacy” section, which clarifies how their software deals with privacy.

I appreciated what @danielafiorellino said about the annoying queues at airports. On my side, I immediately thought of concerts; I’m one of those people who love the queueing experience before concerts, but it’s clearly annoying and debilitating for many. Anyway, that’s not what I want to talk about; I bet you all know what happens with the so-called “bagarini” (ticket scalpers) when tickets go on sale, and the situation is escalating quickly, with ticket prices rising and more people complaining about how badly ticket sellers behave. The Verge wrote about Ticketmaster (the guys who create a virtual queue of customers who literally wait in line to buy their tickets) and Live Nation investing in a technology called BlinkIdentity to replace the ticket with facial-recognition-driven access to live events. Sure, the companies would need to develop a database of all their concertgoers’ faces, and there the privacy issue would come up, but it would help prevent secondary ticketing, which is hurting the live music industry.

Last point for today: I am a CES (Consumer Electronics Show) enthusiast, meaning that I tend to be easily charmed by all the innovations presented at the convention in Las Vegas. At this year’s edition the company FaceMe presented a facial recognition system that can read the emotional response to a product and customize advertisements according to the user’s degree of appreciation of other products. A wonderful marketing move; still, this video, which shows how the system works, raises some concerns about the recurrent privacy issue. Do you think it would be cool to receive customized advertising according to what we (or at least our bodies) seem to like?

    Have a nice day,
    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3376

    Hi everyone,

“Improving lives”: that’s the first line you hear in the video about facial recognition systems in China. Honestly, I can’t really see how that kind of application of such a technology could be considered an improvement. To me, verifying people’s identity through CCTV does not sound like “safety” at all; instead, it feels as if the government were trying to suppress both thought and self-expression in order to maintain a totalitarian regime, which is clearly why we are all complaining about the potentially dangerous effects of FR systems spreading throughout Europe and America. You can also hear the narrator say that if you’ve done nothing wrong you have nothing to fear, but I treasure the idea that there can be no progress without some kind of protest, which is exactly what the Chinese government would mark as “wrong”.
Privacy lies on the other side of the matter, with citizens being profiled without explicit permission; personally, I would never tolerate anything like that! A government vouching for a supposedly safer country is not something I would trade my private life for.

@francescatomasello and @davidetoniolo mentioned Amazon dealing with facial recognition systems for Amazon Go; strolling through the news, I read that Amazon’s own facial recognition system, Rekognition, has been blamed for not being accurate in identifying women and darker-skinned people, resulting in poor performance when analyzing faces and expressions. A group of MIT researchers suggested that the company should improve its algorithms and remove any bias before selling it to police departments (now, imagine the Chinese government’s systems making the same mistakes as Rekognition’s; wouldn’t that be a complete disaster?). Even though the biased analysis might only affect image comparison, for example when a user is searching online, Amazon claims that Rekognition could also serve as a tool to improve public safety, which, considering its flaws, would be awful. Note that the Chinese idea of turning a country into a safer place through face identification is not that far from Western culture …

As for the privacy issue, I’d like to share an article I read in Esquire; Kate O’Neill, an expert in data-driven technologies, tweeted about the 10-year-challenge craze of the last two weeks, wondering whether people posting pictures of their past selves could help “train facial recognition algorithms on age progression and age recognition”. It might easily be mere paranoia, but if things went that way, we should all be aware of what posting pictures on social networks could actually mean. If facial recognition systems worry us because they might be used for the wrong reasons, then we should start acknowledging that we might be unwillingly giving away crucial data for facial recognition systems to train on. Given that what the Chinese government is doing is clearly an unjustifiable invasion of privacy, I’d like to hear your opinion on what O’Neill claimed on Twitter.

With all the negative talk about facial recognition systems, I was bound to find a valid application of this technology without most of the side effects we have discussed so far (or at least a usage that could make the invasion of privacy more tolerable). I read about facial recognition being used in hospitals both to classify a patient’s level of pain, in order to manage chronic pain and medication dosage, and to retrieve precious life-saving information about the clinical history of unconscious patients in the emergency room. Researchers at Cambridge University claim that this technology could aid the identification of injured victims after large-scale disasters. Moreover, the US National Human Genome Research Institute has been training facial recognition technologies to diagnose rare diseases: a software called DeepGestalt has been praised for its accuracy in identifying facial characteristics linked to genetic disorders. These are only a couple of examples of fair usage of facial recognition systems, and I think we would all be glad if those technologies met widespread application in everyday life.

Last but not least, I’d like to give a shoutout to the Imaging and Vision project being developed in the DISCo department at Bicocca. Most of their work is focused on improving face, image and object recognition in different fields of study; you should definitely give it a look if you’re interested in understanding the potential of this technology.

    Looking forward to hearing more from you,
    have a nice weekend!

    Jessica

    Jessica Amianto Barbato
    Participant
    @jessinthebox96
    #3294

Hi everyone, I’m Jessica Amianto Barbato. I’m a 22-year-old student with a degree in Psychosocial Sciences of Communication, currently enrolled in the first year of Theory and Technology of Communication. I am among the founding members of Radio Bicocca and I currently take care of its website and digital content, writing posts and shooting videos at events.

If I’m not studying or working, you’ll likely find me playing an instrument, writing, at the cinema or at concerts.

I’m quite a creative mind and I’m always out there searching for something new to do; I fear I might be somehow allergic to spare time: I tend to fill every single free minute of my life with interesting activities, and I am very curious about almost anything!

Sorry for having joined the forum this late!

ABOUT THE TOPIC

First things first: I have been doing some research on AI as a tool to collect data about a website’s audience that shares content through the dark social. In a few words, dark social (read more: https://www.ibm.com/blogs/think/be-en/2018/05/08/marketing-dark-dark-social/) happens when users copy and paste an untraceable URL into a private chat (e.g. Whatsapp or Facebook Messenger) or onto social networks in order to share it with their contacts. Those who manage the website need an alternative to traditional Google Analytics to get to know their audience and understand their online behaviour. In this article published in The Washington Post (https://www.washingtonpost.com/news/theworldpost/wp/2018/09/28/artificial-intelligence-3/?noredirect=on&utm_term=.55e02a68b9e0) the author deals with China being more advanced in AI research than the United States and European countries (@davidetoniolo mentioned data analysis as a fruitful field for AI to operate in), and with artificial intelligence capable of creating artworks that elicit emotions, which sounds amazing to me!

What I found peculiar is the social-network-oriented usage of AI, which deals with the impact of social media on democracy and trustworthy political news on Facebook. According to the article, Italy has been involved in the study of a new approach to prevent fake news from spreading online thanks to some sort of reputation ranking assigned to journalists (has anyone thought of Black Mirror here?). Also, going back to the dark social topic, the article puts forward the idea that AI would be a useful way to track misinformation that spreads through the encrypted Whatsapp environment. An anthropologist claims that many mob killings in India (here is some news coverage if you don’t know what I’m talking about: https://www.nytimes.com/interactive/2018/07/18/technology/whatsapp-india-killings.html) were fuelled by the diffusion of fake news through Whatsapp and that, therefore, “a crowdsourced system managed by human moderators” could monitor the problematic content users are forwarding to each other.

That being said, I know this might sound like an “invasion of privacy” to many of you, and that’s where the worries expressed by Sundar Pichai start to make sense: would democracy be at risk in such a controlled scenario? Diving deeper into the matter, would it be too big a loss if what you got in return was safety? Let me explain: I think I would consider giving away a small part of my privacy if fake news generated crime, but what kind of content would be censored? Would they try to shape my political views?

I found an article by IBM (https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html) that considers both the negative and the positive sides of AI, claiming that it is essential to build trust in these systems and to educate people about how they work and how they can benefit from them. I think, in fact, that it is also important to take into account how the public will accept such a huge innovation; what I fear the most is that, once an ethical usage of AI is achieved, let me say, government-side (or even company-side, considering the huge improvements in marketing that AI could bring), people won’t be too fond of relying on it.

We all know that governments are aware of the ethical problems artificial intelligence brings along, but are they considering the possible opposition to the new technology?

Coming to the project you asked us to comment on, I think it’s such a forward-looking idea! On my side, I would totally love to be able to actively participate in policymaking, but I can also conceive of a future where people are skeptical about the practical benefits of AI. In my opinion, the only way to build awareness of how AI works is education, and shifting the conversation from what governments look forward to doing with it to what people actually think and know about artificial intelligence could ease its adoption in the future. Clearly I’m talking about educating the new generations, who will potentially witness a widespread presence of AI in their lives; what could governments do to make people conscious of what happens behind the scenes of artificial intelligence? How can they increase trust in new technologies?

Strolling through your comments, I read @danielafiorellino asking whether AI will make us lazy (maybe lazier than we already are) or even replace us. Those questions made me realize that I cannot really foresee a future that resembles a sci-fi scenario at its core; I struggle to imagine a world where machines take control over humans (my professor always says that machines are stupid and it’s the human component that makes them “intelligent”; I don’t know if AI makes the case, but I kind of back him in saying that we would never give up being the most intelligent beings on Earth). Sure, AI will make things easier, but we’ll find a way to value even the smallest of our efforts.
