Reading 12

Trolling on the internet is the practice of performing insulting or aggressive acts against others online, usually with the intention of getting a rise out of the victim. This can manifest in many ways, such as repeated messages (spamming), threats, or explicit messages and images. As the internet grows in popularity and more people turn to digital media to express themselves, trolls grow in number and power. They contribute nothing of value to the community, and they make users uncomfortable about participating.

We have discussed previously in class the ethical responsibilities that companies have to their customers. In this case, I believe that the companies creating these online communities have not only a moral obligation but also a business obligation to protect their users against trolls. It is in the companies’ best interest to have their users feel comfortable in the community they have created, and trolls are extremely detrimental to the overall health of that community. Most companies implement this protection through a reporting tool or similar feature; however, many victims do not think this goes far enough. I would encourage companies to put more resources into managing their communities, as that is where the real value of the company lies. Allowing more community moderation, or providing incentives for helping to improve community health, would be interesting and powerful steps toward improving the experience of users. These companies have an obligation to protect their users and to make their product a useful tool for every single user they have.

While anonymity on the internet can of course be abused, I believe that it is a good thing in the general sense. The ability to be anonymous helps protect free speech, especially among repressed populations. It also allows people to embrace their personalities online without fear of retribution in the real world. Furthermore, I do not think that removing this anonymity would solve the internet troll problem. Bullying was a social issue long before the rise of the internet. Despite these being public acts, bullies still exist and instill fear in victims all around the world. Similarly, people on Facebook have shown a willingness to participate in troll-like actions under their real names, and in view of their entire social network no less. Removing the anonymous persona from social networks would not prevent trolling, as public identification does not seem to be a concern for a bully.

At the end of the day, I do not believe that trolling is a major issue for the internet at large. The severity of trolling depends largely on the community and the time moderators and companies put into creating a comfortable, collaborative environment. While simply turning the other cheek can be an effective strategy for individuals to combat trolls, it should not be their only option. The support of their online community should allow them to handle any “troll-y” situations and continue to comfortably and meaningfully contribute to that community for a long time to come.

Reading 11

I think it is very important to note the difference between artificial intelligence and artificial general intelligence. In the public mind, these are two different names for the same goal, but most people are really discussing artificial general intelligence. There are many different kinds of intelligence, just as there are many different kinds of skills. One person can be good at chess, while another studies art; both would be considered knowledgeable or intelligent in their respective fields. Similarly, a computer that masters chess could be considered intelligent in its own domain, but it is obviously not generally intelligent; it cannot hold conversations or reason philosophically. Artificial general intelligence is the technological equivalent of human intelligence, while artificial intelligence can also come in domain-specific forms. By this logic, recent domain-specific systems such as AlphaGo and Watson are indeed forms of artificial intelligence. These are no mere parlor tricks: they show abilities beyond human reasoning within their domains, and they exhibit emergent behavior in their choice of actions (look at the second game of the recent AlphaGo vs. Lee Sedol series).

This of course brings up the question of the Turing test. Alan Turing described a test in which a person holds conversations with both a computer and another person, without directly seeing either party; the objective is to determine which party is the person and which is the computer. When the computer is not consistently identified correctly, it can be said to have passed the test. The Turing test does not, in my opinion, indicate any real level of artificial general intelligence. It does, however, allow an artificial intelligence to show proficiency in human communication. The Chinese room thought experiment does a great job of showing this distinction. Acquiring this ability is an important step in creating AIs that the public finds useful, as it is much simpler to interact with a computer through natural language than through hardware devices. This barrier will continue to shrink as we get closer to artificial general intelligence, as users’ queries and tasks become less restricted by the intelligence’s abilities.
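
As a loose illustration of the Chinese room, consider a toy responder that produces fluent-looking replies purely by pattern matching. The rule book and phrases below are invented for the example; nothing in the program understands the symbols it shuffles:

    # A toy "Chinese room": canned pattern-to-response rules, no understanding.
    RULE_BOOK = {
        "hello": "Hello! How are you today?",
        "how are you": "I am doing well, thank you for asking.",
        "weather": "It does look like rain, doesn't it?",
    }

    def respond(message):
        """Return the first canned reply whose trigger appears in the message."""
        text = message.lower()
        for trigger, reply in RULE_BOOK.items():
            if trigger in text:
                return reply
        return "That is interesting. Tell me more."

    print(respond("Hello there!"))          # fluent output...
    print(respond("Lovely weather today"))  # ...from pure symbol shuffling

Over a short exchange this can look competent, yet it understands nothing, which is exactly the gap between conversational proficiency and general intelligence.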

While there is a mountain of work between the current state of the art and a conscious, thinking AI, I believe that we will reach that point eventually. There has been fascinating research done with C. elegans, a worm whose entire neural network has been digitally mapped. Researchers were able to take the design of that network, replicate physical senses with sensors, and, by passing that information through the network, create realistic movement in a robotic worm. While the gap between that neural network and the human mind looms large, this is a great example of how such approaches might one day lead to an AGI. However, I am not sure that these approaches will be able to create the more intangible parts of consciousness: emotions and desire. At the end of the day, that is what describes a true “mind” to me; not just the ability to reason, but to want and to feel. In short, the human condition describes a true mind.
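
As a minimal sketch of the idea behind that research, imagine propagating sensor readings through a fixed, weighted network to produce motor outputs. The toy wiring below is invented for illustration; it is not the real C. elegans connectome, which has 302 neurons:

    import numpy as np

    # Toy "connectome": fixed weights mapping 3 sensor neurons, through
    # 4 interneurons, to 2 motor neurons. All weights are invented.
    rng = np.random.default_rng(0)
    W_sensor_to_inter = rng.normal(size=(4, 3))
    W_inter_to_motor = rng.normal(size=(2, 4))

    def step(sensor_readings):
        """One feed-forward pass: senses in, motor activations out."""
        inter = np.tanh(W_sensor_to_inter @ sensor_readings)
        return np.tanh(W_inter_to_motor @ inter)

    # e.g. touch, food, and light readings driving two muscle groups
    motors = step(np.array([1.0, 0.0, 0.3]))
    print(motors)  # activations one could map onto robot actuators

The robotic-worm work follows the same shape: sensors feed activations into the mapped network, and whatever comes out the motor end drives the body.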

Reading 10

Net Neutrality, also known as “Open Internet”, is the concept that all legal traffic on the internet should be treated equally. This means that Internet Service Providers (ISPs) cannot slow down access to certain sites or provide faster connections to others. A common example is Netflix, where some service providers have required Netflix to pay fees in order to ensure that customers receive fast access to the streaming site. Net Neutrality forbids this kind of behavior, as all traffic must be treated equally. To ensure this, Net Neutrality proponents want government regulation of ISPs with explicit rules on what they may and may not do with the traffic they carry. Proponents claim that preserving a free flow of traffic on the internet is essential to the good of users and to supporting technology businesses. They are concerned that if traffic can be prioritized, innovation will suffer, as the barrier to entry for new competitors, such as technology startups, will be extraordinarily high. It would have been extremely difficult for Google, Facebook, LinkedIn, and others to rise in popularity if they had needed to pay for users to be able to access their services. On the flip side, opponents are concerned about heavy-handed government regulation. They think that if the space is regulated too much, it will stagnate as companies find it difficult to make business cases for new technologies.

Personally, I firmly believe in the concept of Net Neutrality. From a customer perspective, I am paying for access to the internet, not access to particular sites. I do not want my access to lawful sites to be determined by my service provider. Also, as an engineer who enjoys working on new and interesting ideas, the startup argument speaks to me. I think that the steps taken by the FCC with the newly proposed rules are a great way to start enforcing net neutrality. The idea of classifying ISPs as common carriers seems logical to me, as the internet is as ubiquitous today as landline telephones were in the past.

There is of course a danger here of over-regulation of the internet industry. The internet is known for being a free-spirited space where anything is possible. However, I think the current proposals strike a good balance and take a very soft hand, regulating only the bare necessities. As The Verge reports, the FCC has declined to apply over 700 of the telephone-carrier rules to ISPs. As for the idea that this puts an undue burden on the companies: treating all traffic equally is actually easier from a technical perspective, since it requires no extra machinery. Making deals to favor certain traffic may bring in more profit, but it also requires more engineering work. I also do not really see the argument that it will hinder innovation. The goal of net neutrality is to allow the internet to continue to grow as it has for the past 20 years. As new content types change user demands, infrastructure will continue to evolve to support them.
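
To make the “no extra machinery” point concrete, here is a rough sketch contrasting a neutral first-in-first-out forwarder with a paid-prioritization scheduler; the packet format and fast-lane customer are invented for the example:

    from collections import deque

    # Neutral treatment: one FIFO queue, no inspection of who sent what.
    def neutral_forward(packets):
        queue = deque(packets)
        while queue:
            yield queue.popleft()

    # Paid prioritization: extra machinery to classify and reorder traffic.
    PAYING_SOURCES = {"bigstreamer.example"}  # invented fast-lane customer

    def prioritized_forward(packets):
        fast, slow = deque(), deque()
        for pkt in packets:
            (fast if pkt["source"] in PAYING_SOURCES else slow).append(pkt)
        while fast:
            yield fast.popleft()
        while slow:
            yield slow.popleft()

    packets = [{"source": "startup.example", "data": b"a"},
               {"source": "bigstreamer.example", "data": b"b"}]
    print([p["source"] for p in neutral_forward(packets)])      # arrival order
    print([p["source"] for p in prioritized_forward(packets)])  # payer first

The neutral version is the simpler one; everything the prioritized version adds (classification, separate queues, billing relationships) is overhead that exists only to treat traffic unequally.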

Project 03

Our letter to Governor John Kasich can be found on Casey Hanley’s blog: https://caseyhanleyethics.wordpress.com/2016/03/24/letter-to-government-representative-regarding-encryption/

I do not think encryption is a right, but rather that privacy is. Encryption is a tool to protect this right in the modern world, and so it should be protected in the same way the right itself is. Just as citizens have free speech to speak out against their government, they should also have encryption to protect their private thoughts from that same government. Not having a safe place to discuss ideas would severely hamper the free exchange of thought, something upon which the United States was founded and on which it prides itself. Therefore, the right to privacy, and through it encryption, should be protected by the government for the people.

Unfortunately, encryption is not that important a political issue to me. This is not because I do not think it needs to be discussed (it definitely does), but rather because I see very few politicians who are willing, or even able, to have a realistic conversation about it. I should definitely be supporting the idea, and general technical literacy, much more than I do. I believe this is at least in part because I, being a young computer science major, spend most of my time with people who already understand how the technology works and what the implications of compromising it are.

While there may seem to be a competition between privacy and national security, I do not necessarily view it that way. To me, it is more cyclical in nature. New types of privacy will continue to arise as new mediums do, just as the concern over digital privacy has arisen in the past 50 years. Along with them, new technologies will emerge to protect those new forms of privacy. The law will always be one step behind this process, as is the nature of the judicial process. There will therefore be a continuous game of cat and mouse between the law, with its goal of protecting the nation, and the new technologies being developed. This process can be seen on a smaller scale in the evolution of “secure” encryption algorithms: as flaws are located, new algorithms are developed, and the race begins again. This will continue on both a micro and a macro scale, for example with the eventual advent of quantum computing putting prime-based cryptographic algorithms out of commission. The only way one side will win is if the other gives up or is forced to stand down. I do not see this happening any time soon; there are passionate people on both sides of the debate, and more importantly, the creation of new algorithms is an essential part of the technical innovation that the United States prides itself on so much. To hold back the encryption side of this chase would be to handicap the technology companies that are doing so much for the economy, and thus give other companies and countries an opportunity to surpass them.
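
As a toy illustration of what “prime-based” means, here is textbook RSA with tiny numbers. Its security rests entirely on the difficulty of factoring n, which is exactly the problem a large quantum computer running Shor’s algorithm would make easy (real keys use primes hundreds of digits long, and real systems add padding on top of this):

    # Textbook RSA with toy primes -- for illustration only, never real use.
    p, q = 61, 53                 # two secret primes
    n = p * q                     # public modulus: 3233
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent, coprime to phi
    d = pow(e, -1, phi)           # private exponent: 2753 (Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # encrypt with the public key
    recovered = pow(ciphertext, d, n)  # decrypt with the private key
    assert recovered == message

    # Anyone who factors n back into p and q can recompute d. With toy
    # numbers that is trivial; with a 2048-bit n it is classically
    # infeasible -- and that assumption is the entire security of the scheme.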

Reading 09

The DMCA, of course, does not encourage piracy. It requires certain devices to incorporate copy protection measures, such as the Rovi system, and it condemns the circumvention of such protections as an illegal act. Its safe harbor provisions, meanwhile, shield service providers from liability for infringing content uploaded by their users, so long as the providers promptly remove that content once notified.

I believe that downloading or sharing copyrighted works or information can be ethical, but it depends on the situation. I have no problem with people downloading digital copies of works that they already own. This of course applies to movies, but also extends to things such as old video games and emulated games. I do not buy the “sampling” argument as a legitimate excuse, as I believe it is a slippery slope: once you have it and enjoy it, why acquire it legitimately? Where this argument becomes interesting is in sharing content that you own, either digitally or in another medium. No one would have a problem with someone lending a DVD to a friend, but is providing someone a digital copy also ethical? I believe that it is, as what the recipient does with the copy is out of the provider’s hands. Assuming the provider has a reasonable belief that this person will act ethically and return or delete the media once finished, it is ethical to share it.

In high school, we participated in a program where every student had a tablet computer (in the days when “tablet” meant you could draw on the screen with a pen) to use for taking notes and completing assignments. These computers were naturally also used for entertainment, and games could often spread quickly through the student body. Oftentimes they were free games, such as Bloons. Other times, however, they were paid games. Frankly, I don’t know where I stand on the ethics of this. I do not think that many students would have paid for the games, so I do not think there was much lost business here. If anything, it was good publicity for the development studios, and I would hope they gained some new lifelong fans from the sharing. On the other hand, the software was often distributed in a very carefree manner, which didn’t quite feel right to me.

At the end of the day, I do not believe that piracy is a major problem today. I agree with the mindset shared by many writers online in the last few years: those who pirate movies or software are not really lost sales. The people who pirate a movie would not have paid for that same movie, so there is no lost sale. Further, they serve as additional advertising for the work, since they will see it and tell friends. Jeff Bewkes, CEO of Time Warner, has a similar mindset, saying “Our experience is, it all leads to more penetration, more paying subs, more health for HBO, less reliance on having to do paid advertising… that’s better than an Emmy” (http://www.forbes.com/sites/insertcoin/2014/01/24/whatever-happened-to-the-war-on-piracy/#773e24e47820). Furthermore, as mainstream streaming services make content easier to access, I believe that piracy will decrease. The accessibility factor is very important for many casual pirates, as a simple download is much easier than a drive to a store or waiting for a package to arrive. I know I personally am more inclined to pay for a work than to pirate it when it is easy to acquire through a Netflix-type service, rather than requiring a trip to the store to find it.

Reading 08

In general, the goal of a copyright is to protect a work by giving its author or creator the ability to control who can use and distribute it, according to WIPO. The main ethical and societal goal of granting and protecting copyrights is to encourage creators to practice their art. If there were no protections in place, it would be difficult for artists or companies to actually profit from their work, as it could be distributed freely once a single copy was released.

I think it is difficult to describe open or closed source software as “better,” as that is such a vague descriptor. Both have their strengths and weaknesses. The major benefit of open source software from a quality standpoint is that it is available for public scrutiny. This is useful, for example, in cryptography, where having the code available for anyone to read can help prevent bugs that would weaken the software. On the flip side, the benefit of closed source software is that it is more difficult for outsiders to locate whatever bugs do exist, since the code cannot be easily analyzed. Realistically, this means proprietary software can get away with more bugs and other issues, as they may not manifest themselves or be exploited as easily. Ultimately, no software, no matter the methodology or rigor put into it, can be perfect. I find it difficult to choose one approach as generally better, because to me it really is a question that needs to be posed on a case-by-case basis.

I think the distinction between free software and open source is more a matter of mindset than of classification. This is something Stallman touches on lightly in his writings, pointing out that the two terms describe an almost identical body of software. Of the two licenses mentioned, I actually think the BSD license is the more free. Because the GPL attempts to preserve freedom by requiring people to release modified software as free as well, it actually imposes burdens on the users of the programs and thus limits their freedom. This also restricts the ability of many groups to use the software: those who, for any number of reasons, cannot release the software they create as open source. If I were trying to create software for good, I would want that software to be usable by as many people as possible, so I would choose the BSD license.

I believe that of all organizations, the ones that should be endorsing open source software the most are governments and public organizations. These are groups that, by definition, are for the people. The tools that they create are all intended to help the populace. A great way to accomplish that goal is to also release the code behind those tools. I realize that this may not be a practical goal for all software created, as some software depends on the secrecy of its implementation. However, releasing the source code is something that should always be on the mind of organizations, and acted upon whenever possible. When groups make use of existing open source projects, I think that they have an obligation to give back to that community. This can be simple, such as bug fixes or publicity. It can also go further, such as taking partial ownership or committing extended man-hours to support the project. This is mutually beneficial, as the company is enhancing the software for its own use, as well as supporting the project and building rapport with the community at large.

Reading 07

In today’s world, many technology companies work on a business model of providing services to users at no monetary cost. Instead, they charge a price in information, which they collect and categorize for various purposes. I actually do not have a problem with this concept as an economic model, especially considering the immense value that users often get from these “free” services. This model allows all people, regardless of socio-economic status (to a point), to share in the same valuable products and services, as all people have valuable information to offer. It is a little scary how quickly and pervasively this arrangement has entered our modern world, but overall I believe it will open the door to the betterment of both parties. The ethical concerns arise when that data is put to use, such as in online advertising.

As a concept, I also have no issue with targeted advertising. It benefits both the advertisers, who see a much improved interaction rate, and the consumers, who see ads that are actually relevant to them. I understand that some people are uncomfortable or annoyed with being advertised to in general, and that’s okay. For those who do not shut off the advertising systems, I believe they would rather see interesting ads than things that have no real meaning or value to them. The ethical issues that arise, then, are delivering ads in an appropriate manner and using personal data ethically. Unfortunately, many advertisers employ tactics designed to force the user to pay attention. This can take the form of a large pop-up ad, an annoying sound, or a deliberately difficult close button, for instance. It is unethical, in my opinion, for advertisers to craft ads that are deliberately disruptive, deceitful, or unintuitive. This also works against the sites that display these advertisements, which rely on ad revenue but must maintain a good user experience. If the ads are poor, then the user experience goes down, and the entire value of the site goes with it. Intrusive ads are thus a long-term loss for both parties, and advertisers are obligated to themselves, their providers, and their users to create pleasant, effective ad experiences.

The other ethical concern with advertising is the acquisition and use of data. As I mentioned above, I do not have a problem with companies collecting data. I do, however, believe there should be more governance over how that data is handled, protected, and distributed. The thought of selling this data to other parties concerns me, especially when such a marketplace has little to no regulation. I would much prefer my data to stay in the hands of the company that collected it, as they are the ones I have made the “transaction” with. Even for only that company, there are ways to utilize this data effectively, as can be seen in the success of Google’s AdWords program. However, I realize that this kind of system puts smaller companies at a disadvantage, as collecting meaningful data requires a large set of users. Therefore, I would like to see more rules requiring that data which is sold or transferred be anonymized to protect privacy. It will be important to be able to hold companies accountable and punish poor business practices in this regard, as can be done in other industries, such as the stock market.
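
As a rough sketch of the kind of anonymization I have in mind, a company could drop direct identifiers and replace user IDs with salted hashes before any transfer. The record fields and salt handling below are simplified assumptions; real anonymization is considerably harder, since quasi-identifiers can still re-identify people:

    import hashlib
    import os

    # Secret salt kept by the collecting company; without it, the hashed
    # IDs cannot easily be linked back to real accounts.
    SALT = os.urandom(16)

    def pseudonymize(record):
        """Drop direct identifiers and hash the user ID before sharing."""
        token = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()
        return {
            "user_token": token,                     # pseudonym, not the raw ID
            "age_bucket": record["age"] // 10 * 10,  # coarsen, don't copy
            "interests": record["interests"],
        }

    raw = {"user_id": "alice@example.com", "age": 34, "interests": ["golf"]}
    print(pseudonymize(raw))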

Reading 06

I do not believe that companies should be forced, or should even consent, to put backdoors into their products, for a number of reasons. The greatest among these is the folly of attempting to create a secure system with a backdoor. It is, by definition, impossible to have a completely secure system that also contains a backdoor, because the backdoor can theoretically be accessed by anyone who knows the code or sequence that triggers it. That would usually be the party the backdoor was designed for, say a government entity. But there is also nothing stopping nefarious powers, such as enemies of the state and oppressive regimes, from finding and using the same backdoor for their own ends. The same applies to government-approved or government-enabled encryption standards that allow a third party to snoop on the plaintext. It is impossible to create a truly secure code that can also be easily decoded by a third party, regardless of intentions. Therefore, this weakening of security, while perhaps useful in specific cases such as this one, is ultimately a built-in vulnerability in our technology that can be exploited by any third party with sufficient knowledge and resources.
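
A small sketch of why this is so, using the third-party cryptography package: an escrowed “lawful access” key decrypts messages exactly as well for a thief who steals it as for the agency it was made for. The escrow scenario below is invented and greatly simplified for illustration:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Imagine a scheme where a device also encrypts everything under an
    # escrowed "lawful access" key held by some authority.
    escrow_key = Fernet.generate_key()

    device = Fernet(escrow_key)
    ciphertext = device.encrypt(b"private message")

    # The intended agency can decrypt...
    print(Fernet(escrow_key).decrypt(ciphertext))

    # ...but so can anyone who steals, leaks, or coerces the same key.
    # The ciphertext has no notion of an "authorized" holder.
    stolen_key = escrow_key
    print(Fernet(stolen_key).decrypt(ciphertext))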

My second concern is that this requirement reaches far beyond the reasonable power of the government. For one, it forces a private company to put its own time and resources into creating this backdoor, for which it will receive no help or remuneration from the government. This sets a dangerous precedent of putting the burden on private companies to assist whenever the government requests it. While Apple can probably absorb this expense without repercussions, smaller companies may not be as financially blessed. I also worry about the precedent this would set for future cases of private information access. Not only would it have been shown that such software can be created, but it would be a simple matter to point a judge in the direction of this case and thus force Apple to use the backdoor software again and again, despite the FBI’s claims that this would be a one-time occurrence.

Ultimately, I believe that Apple has a responsibility to protect its users, not to track them or prevent them from misusing the products it creates. The company should not put its customers at risk of crimes such as spying and identity theft simply to prevent another crime; that strikes me as a lateral trade at best. To those claiming that people should not be worried because they “have nothing to hide,” I would ask them to review history. No one has anything to hide until the time is right. The United States was founded by men who had plenty to hide from the British government, even though they had little to hide before that. It is for this reason that they included such protections in the Constitution: to help protect citizens in future events. Relinquishing liberty for an increased sense of security is a scary proposition to weigh.

Project 02

Podcast

In my experience, the most important parts of the guide all relate to preparedness. It is absolutely essential that you practice interview questions beforehand. When I say practice, I don’t mean simply reading the books, but actually doing mock interviews on a whiteboard. Learn to talk out loud, and work with your peers to get real experience. This will not only make you more comfortable with the questions that will be asked, but also generally better at communicating your responses and working together with the interviewer. As we tried to make clear in the podcast, an interview is a two-way conversation; it is not simply the interviewer sitting in silence while you write mystic code on a whiteboard.

The second piece of the guide that I want to emphasize is side projects. The kind of work that engineers do in industry is remarkably different from the pure, somewhat sterile problems that are often addressed in academic classes. Having experience working on your own projects will help prepare you for what industry coding will be like, though probably with less structure. Side projects also give you great talking points in interviews. Interviewers love to see passion and interest, and having a side project that you are excited to talk about is a great way to cover both of those bases. I feel that it is much easier to talk about personal projects than school projects, which usually have pre-defined goals and requirements. With a side project, you choose the area you are interested in working in, and all of the goals of the project, yourself.

This leads fairly well into the topic of school goals and curriculum. I appreciate that the goal of a university is not necessarily to help you find a good job, but rather to educate its students as best it can and to really stretch minds. I don’t think this should change; the benefits of a broad education are well documented. However, the teaching and curriculum should at least keep in mind that most students will be heading into industry once their studies are completed. Therefore, while it shouldn’t be the focus of the classes, they should keep pace with the expectations that industry has of students. Unfortunately, I do not think the Notre Dame CSE curriculum does this. The way it is currently laid out, students often do not learn the skills they are expected to know for internships and interviews until after the time has passed. I know that I personally have had to teach myself most of data structures and algorithms so that I would be prepared for interviews at major tech companies and be competitive with applicants from other universities.

I know that the curriculum is improving, and I am very excited about this. The combination of Fund Comp II and data structures into one class seems to me both logical and straightforward, and is something I have been pushing for since I took those classes. This will make students much more competitive when applying for internships during the first semester of junior year, as many technical interviews center on data structures such as binary trees and hash maps (see the sketch below). I still think the curriculum could be improved further, however. I would like to see the opportunity for students to begin taking coding classes, such as the Script Based Programming class offered to non-CS majors, during their freshman year. This both allows students more of a chance to get a feel for computer science, and allows those who already know they want to major in it to build their skills as early as possible. Overall, however, I am very happy with the changes that are beginning to take shape.
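
For a flavor of what those interviews ask, here is a classic hash map warm-up of the sort I mean; the problem statement is a generic example, not tied to any particular company:

    def two_sum(nums, target):
        """Return indices of two numbers summing to target; a hash map
        turns the naive O(n^2) scan into a single O(n) pass."""
        seen = {}  # value -> index where it was seen
        for i, value in enumerate(nums):
            complement = target - value
            if complement in seen:
                return seen[complement], i
            seen[value] = i
        return None

    print(two_sum([2, 7, 11, 15], 9))  # (0, 1)

Being able to explain out loud why the hash map beats the naive nested loop is precisely the skill these interviews test.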

Reading 05

The Therac-25 accidents are some of the worst software mistakes we know of. I really feel for the engineer who wrote this software (it has been reported that it was only one engineer), as he was working on such a complex system that I think many, if not most, software engineers would have made similar mistakes. While the physical cause of the accidents was the removal of the Therac-20’s hardware fail-safes from the Therac-25, in theory replacing physical interlocks with software fail-safes should produce an equally safe system. Therefore, I agree with Nancy Leveson’s analysis that “To attribute a single cause to an accident is usually a serious mistake” (http://courses.cs.vt.edu/professionalism/Therac_25/Therac_1.html). Ultimately, I see two key issues behind the engineering errors so central to the Therac-25’s demise: a poor user experience and poor engineering practices.

I will first grant that the Therac was created in a time when interface design was not nearly as mature as it is now. However, two key issues let this problem fester. The first is that the error code “Malfunction 54,” which was presented every time the problem happened, was undocumented. This meant that operators had no way of discovering what had gone wrong, if anything. If there is no way for the operator to investigate, they will quickly lose interest and brush the error off as unimportant, when in reality it was an important indicator of what had happened to their patients. The second is that no wait screen or interlock existed to prevent the operator from entering data while the machine was moving its equipment into position, which is of course exactly the race condition behind the overdoses (sketched below).
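
A loose sketch of that race; the variable names and timings are invented, and the real Therac-25 logic was assembly on a PDP-11:

    import threading
    import time

    # Invented names and timings; a loose analogue of the Therac-25 race.
    state = {"mode": "xray", "hardware_in_position": False}

    def reposition_hardware():
        # The magnets take time to move after the operator edits the mode.
        time.sleep(0.2)
        state["hardware_in_position"] = True

    def fire_beam():
        # Bug: fires based on the requested mode without waiting for,
        # or re-checking, the hardware position.
        if not state["hardware_in_position"]:
            print("OVERDOSE RISK: beam fired while hardware still moving")
        else:
            print("Safe: hardware confirmed in position")

    # A fast operator edits the mode and immediately starts treatment:
    state["mode"] = "electron"
    threading.Thread(target=reposition_hardware).start()
    fire_beam()  # runs before the repositioning finishes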

At a more rigorous engineering level, proper protocols were never followed during the testing and verification of this system. Software tests were not implemented, and fault trees were never designed or analyzed. These oversights skipped important steps that likely could have caught the issues that plagued the Therac-25 (integer overflows, for example, should be easy to check for; see the sketch below). This is an issue that needs to be addressed at a company and cultural level. If management does not encourage employees to follow good practices, and does not leave time for them in planning, it will be very difficult to keep bugs out of the product, regardless of the complexity of the end result.
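
One of the documented Therac-25 bugs was exactly this kind: a one-byte flag was incremented rather than set, so every 256th increment it rolled over to zero, which the software read as “no check needed.” A minimal sketch of that failure mode, with invented names:

    # One-byte counter rollover, in the style of the Therac-25 flag bug.
    flag = 0

    def mark_check_needed():
        global flag
        # Bug: increment a one-byte value instead of setting it to 1.
        flag = (flag + 1) & 0xFF  # wraps to 0 on every 256th call

    for _ in range(256):
        mark_check_needed()

    # On exactly the wrong pass, the flag reads 0 and the safety check
    # is skipped even though the code "set" it just above.
    if flag == 0:
        print("safety check skipped: counter rolled over")

A unit test that hammered mark_check_needed a few hundred times would have caught this, which is why the missing testing culture matters as much as the individual mistake.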

Ultimately, it strikes me as interesting that both of the errors that led to accidents (the interrupt during setup and the integer overflow) could have been prevented from harming patients by a simple physical inspection of the machine prior to use. As the extreme radiation was a result of equipment not being correctly in place, this seems like something that would be very easy to confirm visually and thus prevent deaths. While the software obviously needed to be fixed to ensure correct operation, a 30-second check seems worth a human life.

As a future software developer, I want to make sure that I follow correct engineering practices. I have learned in my internships the importance of testing, and studying this problem simply reiterates that lesson. Well-designed and well-documented systems create safety and efficiency both for me and for those who follow me onto the same systems. Large projects need to be approached with future maintainability and testing in mind from the beginning. And if, God forbid, the worst should happen, it is important that blame not be placed on an individual. Software is a team effort, for better or worse.