Us again! Michael has kindly let us edit Cyber Weekly again this week (thanks for having us 'stay' a little longer Michael).
The theme we have chosen for this week is 'People & Privacy'.
Since the General Data Protection Regulation (GDPR) came into force (adopted in 2016, enforced from May 2018) there has been a stronger link between data protection legislation compliance, cyber security (and information security in general, such as encryption) and user-centric privacy.
While most would generally agree that privacy is a good thing, we often see both ends of the spectrum at the same time: 'rookie mistakes' leading to breaches/fines and over-compliance where organisations take a risk-averse interpretation and send out an email asking people to confirm they are… still happy to receive emails.
This week we gather stories that bring technology, people and privacy together (often ending in fireworks).
This is our second and final week here at Cyber Weekly! We'll be back to your regular scheduled programming with Michael from next week.
The UK's data protection regulator has failed to follow its own advice, admitting a privacy notice for its own staffers – one of its key recommendations for GDPR compliance – remains "under construction".
This story goes to show that even the organisations most focused on data regulation compliance can fall short.
Most, but by no means all, organisations have understood the basic concepts around the updated privacy regulations ushered in by DPA18 and GDPR. However, your responsibilities don't end when you've published a privacy notice, worked out data retention, and put in place a process to handle Subject Access Requests from customers. Internal colleagues are just as entitled to privacy notices describing how their data is processed by their employer, and those rights are equal to those of an external user (such as a customer or member of the public).
You can't forget the other places where your organisation ends up handling personal data, even subtle ones. Online identifiers - such as users' IP addresses - are another area that we see lots of organisations failing to consider as personal data.
(An administrative fine from the ICO to the ICO would be amusingly meta, but alas probably unlikely!)
Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands. [...] The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.
Amazon, Google, Apple (and so on) rely on human helpers to pick up where current speech recognition, interpretation and filtration systems fail. These helpers transcribe and categorise in order to make these systems more intelligent... but sometimes that isn't all they do.
Echos, Dots, Google Home and Apple's Siri all listen for activating keywords - for example, "Hey, Siri" - however it is relatively easy for these devices to 'mishear' a noise/word and assume they are being spoken to.
While unintended activations and unexpected commands can fairly easily be chalked up to technology being imperfect, it is harder to forgive the sharing of recordings in internal chat systems for entertainment, even when we're assured the human helpers do not know who a recording belongs to.
Project Alias, a “smart parasite” that fits on top of your Google Home or Alexa device and prevents it from listening in on your conversations.
If you're now worried about Alexa listening to you but still want to use your Echo/Dot (etc) then gosh do we have a product for you! The 'parasite' will sit on top of your existing device and whisper sweet nothings to it to (in theory) stop it from listening to you.
Using a listening/recording device (from a less-known company) to control another listening/recording device may offer... limited value.
And perhaps require a third device, to control that one...?!
“Where companies rely on consent to process people’s data it is critical that this is more than a box ticking exercise. For consent to be valid, it must be freely given, informed, specific and unambiguous. There’s nothing intrinsically good or bad in cookie technology – what matters is ensuring it’s applied in a way which respects individuals’ rights.”
You have no doubt seen the banners and splash screens on websites that have been on the rise over the last few years, in response to cookie and data protection regulations (namely GDPR). Interpretations have been... varied.
In an ideal world, users are presented with clear, concise and functional options where they can make a fair choice and their experience of the underlying site does not materially change while they do so.
In some cases however these splash screens do not allow users to progress through to the site until they have 'consented' to various opaque conditions. In such a scenario, the user typically clicks whatever they need to in order to access the site and in many cases if they do not click the big 'Accept' or 'OK' button, they can't get in. The complaint against IAB Europe covers one such case.
This is an area which is going to be getting increasing attention in the months and years ahead, especially as more cases like this come up. It is going to expose some interesting areas of debate around what constitutes consent, and in particular informed consent.
Andrew Barker, who works in IT security, scanned the house's Wi-Fi network. The scan unearthed a camera, and subsequently a live feed. From the angle of the video, the family tracked down the camera, concealed in what appeared to be a smoke alarm or carbon monoxide detector.
An undeclared hidden camera indoors is arguably one of the most invasive violations of privacy that one can experience.
Airbnb (according to the article) initially did not do a good job in responding to the concerns of the family but ultimately offered some form of remedy (a full refund). While they clearly should have done better from the get-go, Airbnb ultimately has no physical control (access or audit) over properties listed on its platform, and a large amount of trust is placed in hosts.
Fortunately, these occurrences are extraordinarily rare when you consider the number of properties let/occupied every day on the platform.
Mexico-based digital media company Cultura Colectiva left more than 540 million records — including comments, likes, reactions, account names and more — stored on the Amazon S3 storage server without a password [...] on a separate storage server by defunct California-based app maker At The Pool contained even more sensitive data, including scraped information on more than 22,000 users, such as a user’s friends lists, interests, photos, group memberships and check-ins
Permission models, role-based access and anti-scraping measures can all be quite hard to implement, but Facebook must expend significantly more effort to control its user information if it is to avoid further third-party aggregation in the future.
Consequences to data subjects after a data breach can be hard to quantify (and even harder to attribute), but the personal metadata exposed in these breaches can be used for all sorts of privacy-invasive targeting, from spam through to personality (such as political) profiling, with few protections and even less recourse.
[The Home Office staffer] failed to use the "blind CC" box on the email, revealing the details of other applicants.
A simple and innocent mistake allowed 240 people to see each other's email addresses (and understand they were all being contacted by the Home Office).
(Joel) On face value, the breach 'consequence' is minimal, and while email addresses are personal data, one would consider them pseudonymised (the receiver can't readily identify who the other people are from the visible email list alone) unless the list was in the form
"Firstname LastName" <firstname.lastname@example.org> rather than
<email@example.com> - but the article doesn't clarify this.
Hopefully the root cause analysis (causes, plural) will recognise that technology can be used to help humans - for example, a pause or warning prompt when lots of external people are being sent an email, or when the number of recipients is unusually high - giving the human user a chance to reflect on whether this is the right thing to do before clicking 'Send'.
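That kind of pre-send guard is simple to build. A minimal sketch (the internal domain and thresholds here are illustrative placeholders, not anything from the article):

```python
def warn_before_send(recipients, internal_domain="example.gov.uk", threshold=20):
    """Return warnings to show the sender before an email goes out.

    recipients: list of (field, address) tuples, e.g. ("to", "a@b.org").
    Addresses in To/Cc are visible to everyone; Bcc addresses are not.
    """
    warnings = []
    visible = [addr for field, addr in recipients if field in ("to", "cc")]
    external = [a for a in visible if not a.lower().endswith("@" + internal_domain)]

    if len(external) > 1:
        warnings.append(
            f"{len(external)} external addresses are visible to all recipients "
            "- did you mean to use BCC?"
        )
    if len(visible) >= threshold:
        warnings.append(
            f"This email has {len(visible)} visible recipients "
            "- please confirm this is intentional."
        )
    return warnings
```

A client that ran this check on the Home Office email (240 visible external addresses) would have produced two warnings; the same message with everyone in BCC would have produced none.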
The unredacted list, which included addresses of 203 alleged gang members, was leaked after being emailed to others by a Newham Council worker. Investigators said some on the list had been "victims of violence", but it was "not possible to say" if the attacks had been a result of the breach. ... The Information Commissioner's Office said it was "unnecessary, unfair and excessive" to share the unredacted version with so many people and that the risks "should have been obvious".
Data breaches involving usernames/passwords, other personal data or credit card data are bad but it is extraordinarily rare for these to lead to physical harm or loss of life.
The £145,000 fine levied by the Information Commissioner makes little difference in protecting individuals from harm, but perhaps it will serve as a wake-up call that data protection and privacy must remain user-centric in spirit rather than compliance-focused: at the 'end of the data' is a human being.
"An ICO investigation found that Bounty, a pregnancy and parenting club, collected personal information for the purpose of membership registration through its website and mobile app, merchandise pack claim cards and directly from new mothers at hospital bedsides. But the company also operated as a data broking service until 30 April 2018, supplying data to third parties for the purpose of electronic direct marketing.
Bounty breached the Data Protection Act 1998 by sharing personal information with a number of organisations without being fully clear with people that it might do so.
The company shared approximately 34.4 million records between June 2017 and April 2018 with credit reference and marketing agencies, including Acxiom, Equifax, Indicia and Sky."
(Jon) :facepalm: Actually, I find this a particularly grotty example, due to the way that Bounty collects data in hospitals from new parents. That's a time when everything's a blur, and you barely know which way is up. Making an informed consent decision about marketing and data collection in this sort of scenario is complicated.
(Joel) The data sharing took place under the previous Data Protection Act, so the maximum possible fine would have been £500,000; however, the ICO did not go to the limit on this occasion.
Privacy is worth money, and on this occasion a pregnancy and parenting club decided to sell it for monetary gain when it was unclear to its members that it was going to do so.
This may also act as a wake-up call to organisations who buy data to enable marketing: they should place a stronger onus on data sanitisation and seek more assurance that the data they receive is being sold to them not only lawfully, but also fairly.
When some people signed up on Facebook, instead of getting a verification email or a code sent to their phones, they would instead get a prompt to enter their personal email's password to verify their new accounts -- essentially giving login credentials to the social network. [...] "...we understand the password verification option isn't the best way to go about this, so we are going to stop offering it,"
Shortly after Zuckerberg blogged about his privacy-focused vision, it surfaced that Facebook had been asking users for the passwords to their email accounts, so that Facebook could verify them.
In future, we hope Facebook will avoid doing the complete opposite of a good idea and simply send an email to the address in question (as part of a positive validation loop) the next time it needs to check that an email address is working and appropriately owned.
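For the avoidance of doubt, a positive validation loop never needs the user's password: the service mails a single-use token to the address and considers it verified only when the token comes back. A minimal sketch (the class, the send hook and the URL are all illustrative, not Facebook's actual mechanism):

```python
import secrets

class EmailVerifier:
    """Verify ownership of an email address via a single-use token."""

    def __init__(self, send_email):
        # send_email is any callable(address, message), e.g. an SMTP wrapper.
        self._send_email = send_email
        self._pending = {}  # token -> address awaiting confirmation

    def start_verification(self, address):
        """Generate an unguessable token and mail it to the address."""
        token = secrets.token_urlsafe(32)
        self._pending[token] = address
        self._send_email(
            address,
            f"Click to confirm your address: https://example.com/verify?t={token}",
        )
        return token

    def confirm(self, token):
        """Return the verified address, or None if the token is invalid.

        pop() removes the token so it cannot be replayed.
        """
        return self._pending.pop(token, None)
```

The service only ever learns that someone who can read that inbox clicked the link, which is exactly the property being tested, with no credentials changing hands.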
Location-tracking apps can be creepy, handy or romantic – it all depends who’s watching you
(This Cyber Weekly has been a bit doom and gloom so far but don't worry - things are taking an upward turn!)
Location-tracking can make our lives simpler, better and safer when used (and secured) well: whether it is sharing your location via the Uber app (so your friends believe you're actually 5mins away and not still in bed), trying to find your lost iPhone or tracking a loved one's journey to make sure they are going to get home safe.
(Joel) I mostly use it to see how far away the Deliveroo rider is.
"I will panic correctly!
I acknowledge that I will probably cause a security incident. When we’ve been hacked really bad, I’ll direct my calmest and most composed version of panic towards our security group" ... This is a minimal policy drawing from ISO27002, but articulated in a way that could actually be understood. By being a bare minimum policy, it should be applicable almost everywhere. Add what you need based on the maturity of your company, like vulnerability disclosure or regular third party audit requirements.
Writing security policies can often be hard, and a good example of 'fighting the last war'. Often when you read policies, you can almost chart the different incidents and near-misses that organisations and the author have experienced, with mitigations for these events being codified into the policies.
Worse yet, the policies can often be written in pseudo-legalese - presumably based on an implicit belief that this is a serious document, and serious documents need serious words, and that if it's simple no one will take it seriously. But, if it's not simple, no one will understand it, and if no one understands it, then they definitely won't follow it, and if they're not following it, why are you bothering to write it?
I (Jon) had a conversation with a security person recently where they openly admitted that their organisation's security policies were confusing, but that this was 'ok' because it meant that if someone did something wrong then the policies would 'probably cover what they did and we can get them for it'. What a horrible way to think of your staff, and what a terrible way to help them to be secure!
This sample policy shows that you can make information security easy to understand, without losing any of the important and serious details. Like the training course we linked to last week, thinking carefully about your audience, and good ways to communicate with them, really helps in terms of making security and privacy accessible and actionable.
Hidden in plain sight, GCHQ’s secret former London office has been revealed as a nondescript postwar block close to the centre of power in Westminster
(Joel) I have had a fractional sense of smugness walking down Palmer Street, knowing (summarily) what unsuspecting members of the public did not... but I guess now the jig is fully up, there is little to distract me from the realisation that I seem addicted to Starbucks coffee.