"Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen. And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world today – nation-state action and organized criminal action."
https://thenextweb.com/security/2017/05/15/how-one-guy-stopped-the-wannacry-ransomware-in-its-tracks-after-it-spread-to-150-countries/#.tnw_eSDCtyZ1
The ransomware known as WannaCry has been in the news lately because of its use in the latest global cyberattack, which affected government agencies around the world, particularly in the US, Russia, and the UK. Reportedly, the NSA was the first to discover the underlying vulnerability and kept it secret in order to use it for its own cyber operations. Hackers then stole the exploit and used it to break into thousands of older Windows computers at important organizations.

Many see this as a problem with the NSA and the government, because situations like this, where hacks and leaks cause major privacy and security issues, keep happening. I believe, like many others, that the NSA shouldn't be secretly stockpiling exploits in our technology, especially technology that is common and found in computers, ATMs, smartphones, etc. Hackers are more capable than ever, and it's only a matter of time before leaks happen and sensitive information gets into the wrong hands. Keeping vulnerabilities secret to use as cyber-weapons sounds like something a paranoid person would do. While it might normally be good for the NSA or another agency to have this information in order to catch terrorists, for me personally those cases are so rare that it's not worth them keeping such dangerous secrets. The CIA and NSA are large agencies that are still vulnerable themselves, especially because they are prominent targets. If they didn't stockpile these exploits, there wouldn't be so many hackers trying, and in some cases succeeding, to steal them. I don't hear about these agencies protecting us with these exploits all the time, so I think it's safe to assume they use them only rarely to protect citizens against terrorists. By stockpiling these exploits, agencies create a greater risk of worldwide damage if they are ever hacked.
As long as hacking and leaks keep occurring, not only Americans but the rest of the world will feel unsafe. I don't think that hoarding these secrets is helping as much as these agencies believe; it's doing more harm than good. I don't want overconfidence in our security to lead to damage to organizations and people's privacy worldwide.
There's talk of immortality being reached within our lifetime, but what would it cost us?
While doing my research into autonomous life support, I found that people's attitudes toward life-altering procedures involving technology can differ a lot. If someone were to become immortal with the assistance of autonomous technology, would it affect society and our humanity? Consequentialist ethical frameworks could reasonably argue that by allowing such technology to rule our lives to the point where it is keeping us alive, we may lose our humanity. If we live longer than expected, be it 10 years or 100 after a fatal accident or life-threatening disease, would we stop being human?

I think that one of the most important and fundamental aspects of being human is our mortality. By preventing our deaths, we may lose touch with reality. Things that used to mean a lot to us, such as relationships, hobbies, and societal duties, may lose meaning when we live for many years past when we were supposed to die. If people who were supposed to die at 55 end up living to 150 or more, there would be definite changes to how our society functions. We do everything because we are aware of our mortality. We invent, create, and express ourselves because we know that we won't be able to once we die, so we do everything we can within our lifetime, knowing our time will be up sooner or later.

If we change this fact, I would go so far as to say that the consequences for our society would be harmful. I think that changing something fundamental to our existence can't go without consequences. I don't know what the extreme would be, but I imagine our society might be diminished. Our dependence on technology to postpone natural death would cause us to lose our passion, our deep emotions, our love of life. Why be passionate about something that will pass in the blink of an eye within such a long lifetime? I think our perspective on life and time would change for the worse.
Immortality is really not something we should strive for, because death is something that we as humans have always dealt with, and changing that could destroy society as we know it.

Machine intelligence and AI are becoming more advanced, but also more complex and mysterious, as time goes on. As Zeynep discusses in her TED talk, machine learning systems are being developed that can make predictions with high accuracy. These systems can be used for hiring individuals or for determining who is more likely to develop illnesses like postpartum depression. They are also mysterious because we don't know why they make their predictions. Since this is different from traditional programming, even the creators can't look into the "thoughts" of the system to see what it's thinking, a "black box" as mentioned in the talk.
Hearing about this technology, some questions inevitably come up. Should we be trusting decision-making to computers that we can't control or understand? There are risks that come with implementing such technology. These systems, while very efficient, can also be problematic because they lack the values that people have. For example, a system could dismiss people with a chance of developing depression from a pool of potential hires for a company. Depending on the company's values, it might agree or disagree with this; the trade-off here seems to be money versus the workers. Not every company is ready for a machine to make value judgments for it. And as Zeynep mentioned, implementing this technology could unknowingly affect companies and individuals in the long term.

I believe we shouldn't use this technology for certain kinds of decision-making; we should take its predictions with a grain of salt and use them only to aid us in deciding. As said before, we can't understand how or why the system arrives at its predictions, which makes them somewhat unreliable. And while humans are prone to corruption and bias, they have a level of logic and thinking that computers have yet to fully match.