A panel at the online Web Summit 2020 discussed how the US can resolve the conflicts between security and issues such as privacy and free speech in the coming year and beyond.
One issue highlighted was the growing level of disinformation across the internet, a problem that is particularly difficult to resolve in countries like the US, where the principle of freedom of speech is so deeply ingrained. Susan Landau, professor at Tufts University, explained: “The disinformation problem is a unique one to the US; we’re such firm believers in our first amendment that we find it very hard to prescript what kind of information can be easily available.”
Nevertheless, there are potential solutions that can tackle the scourge of fake news without necessarily impinging on free speech rights. Landau highlighted the approach taken by the Baltic countries, largely in response to misinformation campaigns emanating from neighboring Russia. These countries focused on educating their citizens to be more analytical and careful not to take what they see and hear at face value. “Long-term, that’s part of a solution if we’re going to keep the first amendment,” she added.
However, such an approach is unlikely to be adequate by itself, with Paul Syverson, mathematician at the US Naval Research Laboratory, noting that “if all the information you’re getting is only framed by one perspective and it’s a distorted one, or one that is full of misinformation, it’s going to be very hard for you to practice this judgement.”
It is therefore important to place greater onus on large tech firms to contextualize misleading content, without actually removing it from their platforms. An example of this is labelling certain claims as disputed, as Twitter did in response to some of President Trump’s allegations of voter fraud during this year’s election. Syverson pointed out that “while section 230 [of the Communications Act] does protect your ability to put up content produced by others with impunity, you are free to make this available and police it yourself as best you can.”
Another consideration going forward, according to Landau, could be amending section 230 to disincentivize tech firms from targeting individuals with only certain types of information and perspectives based on their interests.
The panel also highlighted the need to recognize that physical safety and cybersecurity are becoming increasingly interlinked. Bruce Schneier, security expert and founder of Schneier on Security, noted that the growing reliance on digital technologies for critical services, ranging from medical devices to power plants, makes them an increasingly attractive target for cyber-criminals. “There are a lot of laws and regulation around things that are physically dangerous and computers are going that way,” he said.
This year, governments’ responses to the COVID-19 pandemic have thrown up a number of new issues in the security-privacy debate, including the use of contact tracing apps and the potential for immunity passports once vaccines are introduced. Schneier acknowledged that most people accept that such measures are justifiable in a time of emergency, “as long as we recognize that it is temporary.”
Syverson concluded on a positive note, expressing hope that the COVID-19 pandemic would serve as a “wake-up call” about the extent of misinformation circulating online, with many of the claims made so contrary to reality that they are impossible to ignore.