Latin America the 'capital' of Emotet Trojan activity
By Dev Kundaliya | News | 14 March 2019
Recorded Future tracked the locations of command-and-control servers of the most virulent Trojans over a 39-day period
Latin America is the surprising ‘capital' of remote access Trojan (RAT) activity, according to researchers at Recorded Future's Insikt Group, with the Emotet Trojan particularly widely used by cyber criminals in the region.
Recorded Future found active malware controllers for 14 malware families during the 39-day period of their analysis earlier this year.
But rather than tracking all malware controllers, the researchers focused on just three particularly virulent families - Emotet, ZeroAccess and xTreme RAT. Emotet was found to be the most prevalent in Latin America, while ZeroAccess and xTreme RAT were more commonly run out of South Korea, the US, the UK or India.
One of the most active Emotet command and control centres, though, was found to be operating out of South Korea.
The team identified 26 organisations with hosts infected with Emotet, most of which were also located in Latin America. These organisations were found to be operating in the finance, automotive, energy, retail, construction, entertainment, technology and logistics sectors.
Infected xTreme RAT hosts were identified across Europe, the Middle East, South Asia and East Asia, with compromised organisations including companies in the gaming, telecoms and IT sectors, as well as an industrial conglomerate.
The research team also identified a single instance of a victim organisation communicating with a ZeroAccess Trojan C2 active on a Romanian IP address. The victim organisation was an IT firm based in East Asia.
A RAT is malware capable of enabling third parties to take full control of a computer system via a remote network connection. RATs typically include a backdoor for administrative control over compromised systems.
In most cases, RATs are downloaded accidentally through a user-requested programme or via an email attachment. Once a system is infected by RAT malware, it enables attackers to use the system to spread the programme to other vulnerable systems.
A RAT enables intruders to perform a variety of activities, such as monitoring user behaviour, logging keystrokes, accessing and exfiltrating confidential information, manipulating file systems, recording host audio and video, and much more.
Many state-backed advanced persistent threat groups also prefer to use RATs to attack organisations, because the malware is easy to configure, modify, and use. Its prevalence also provides a modicum of plausible (enough) deniability.
In the current study, researchers examined network metadata using the joint Recorded Future and Shodan Malware Hunter project, as well as the Recorded Future platform. The team looked for active controllers in the period from 2nd December 2018 to 9th January 2019.
The analysis enabled the research team to identify several RAT command-and-control (C2) servers as well as corporate networks that were communicating with those controllers.
UN report blames North Korean attackers for theft of $571 million from cryptocurrency exchanges
By Dev Kundaliya | News | 14 March 2019
Cyber-attacks by North Korea-linked groups have grown in scale and sophistication over the past three years
A new report from the United Nations Security Council (UNSC) has blamed state-sponsored hackers from North Korea for stealing an estimated $571 million from cryptocurrency exchanges between January 2017 and September 2018.
The report, which was published on Monday, noted that cyber-attacks by North Korean hackers have grown in scale and sophistication over the past three years. Moreover, those cyber-attacks have also become a vital tool for hackers to illegally transfer funds in an effort to circumvent sanctions imposed on the country.
"The ability to breach banking security is extremely worrying and raises broader questions," the report added.
The UN report cited private-sector research to highlight that North Korea-based hacking groups infiltrated at least five Asian cryptocurrency exchanges over a period of 21 months from January 2017 to September 2018, stealing a total $571 million.
The most lucrative of these attacks was the January 2018 hack of Coincheck, a Bitcoin exchange based in Japan.
The panel alleges that Pyongyang is supporting hackers to illegally transfer funds from foreign financial institutions in order to prop up its own economy.
The infrastructure required to support such attacks is not insignificant. In the case of the Bangladesh Bank theft, money needed to be quickly withdrawn from the banks it was transferred to, with some of it laundered via casinos in the Philippines.
In 2018, one UN member state wrote to the UN panel, highlighting that North Korea's cyber-focused military units are also operating in foreign countries and trying to generate money for the government.
According to Russian cyber-security firm Group-IB, Lazarus Group conducted several attacks against cryptocurrency exchanges and mining companies in 2017 and 2018. Lazarus, which is also known as Guardians of Peace, HIDDEN COBRA, NICKEL ACADEMY and ZINC, is also thought to be behind cyber-attacks against Sony Pictures Entertainment - forcing staff offline for two months, as a result - and multiple banks across the world, including the Far Eastern International Bank of Taiwan and Banco de Chile, in addition to Bangladesh Bank.
The UN panel recommends that UNSC members take appropriate measures to enhance information sharing on North Korean cyber-attacks, both with other countries and with their own financial institutions.
Countries also need to boost their network security measures to prevent attempts by North Korean hackers to attack financial institutions in order to generate funds for the regime.
China is overtaking the US in AI, warns Allen Institute for Artificial Intelligence
By Graeme Burton | News | 14 March 2019
Apple and Google being overtaken by China's National University of Defense Technology
Investments by China's government into artificial intelligence from more than a decade ago are starting to pay off, with the country expected to overtake the US by some measures this year.
According to the Allen Institute for Artificial Intelligence (AI2), founded by the late Microsoft co-founder Paul Allen, researchers in China already publish more papers on AI than researchers from the US.
While much of the research has been classified as "medium" or "low" quality, research from China is expected to overtake the US in the most-cited 50 per cent of papers this year, the most-cited 10 per cent next year, and the most-cited one per cent by 2025.
The rising quality of research reflects big investments by China's government in AI for more than a decade, and the widespread application of AI across the country, providing a market and, therefore, a business ecosystem for artificial intelligence.
That rise was reflected in last year's academic contest, sponsored by Apple and Google, to find the best algorithms for identifying images captured on camera. The winner wasn't a US university, or a submission from Apple or Google engineers, but China's National University of Defence Technology.
Indeed, according to Cady and Etzioni, of the top ten academic research institutions in China for AI, four are based in Hong Kong, although the Chinese Academy of Sciences and Tsinghua University are way out in front with 252,737 and 211,264 citations respectively. Only around five per cent of papers had co-authors from both China and the US.
"By most measures, China is overtaking the US not just in papers submitted and published, but also in the production of high-impact papers," concluded the authors.
They added: "Recent US actions that place obstacles to recruiting and retaining foreign students and scholars are likely to exacerbate the trend towards Chinese supremacy in AI research."
'Upgrade' elections with online voting, says cryptography expert behind Switzerland's e-voting system
By John Leonard | News | 14 March 2019
The last major upgrade to UK electoral law was votes for women 100 years ago, notes cryptographer David Galindo
If you were to design a democratic voting system intended to exclude certain groups in order to favour the incumbent political parties, you'd probably come up with something like the processes used in most Western countries today.
You wouldn't need to build insecure back doors or vulnerabilities; you'd simply make it as inconvenient as possible for certain people to vote. In the US, for example, gerrymandering is legal and employed by both major parties, likewise voter suppression.
It is also not uncommon to find polling stations sited only in the suburbs making it hard for non-drivers in the inner cities to cast a vote. People with disabilities, minorities, expatriates, young people and those serving in the military are similarly inconvenienced. It's not just the US, of course. Variations on the theme of deliberate barriers to participation are commonplace almost everywhere.
This, rather than genuine fear of hacking, is the main reason for resistance to online voting by politicians, believes David Galindo, a cryptographer who has helped design online voting systems in use in Switzerland and Australia.
"Once they've been elected by the current system, they have zero interest and zero motivation to allow a new voting channel that will make it easier for people who currently are not voting because of the inconvenience," he said.
The state of New South Wales only backed down after losing a 2008 court case brought by blind voters who complained of a lack of access. Only about 10 per cent of blind people now read braille, with most depending on electronic devices to help them navigate the world. Arm suitably twisted, the Australian state decided that e-voting was the least-worst option, and it was introduced for certain groups.
Galindo, now a senior lecturer in computer security at the University of Birmingham, was instrumental in designing the system with the Spanish developer Scytl. He remains a passionate advocate of online voting as a way of including disenfranchised and excluded voters, who often bear the brunt of government policy, and is an adviser and contributor to the think tank WebRoots Democracy, which campaigns for the better use of technology in democratic representation.
Political resistance is not the only hurdle faced by online voting. There's also the ever-present fear of powerful state actors manipulating elections. Galindo insists there is no evidence so far of this occurring in Switzerland, Australia or Estonia, and that any such actions, including the apparent Russian hacking of the US electoral register, have all occurred within the existing system.
Nevertheless, there is extreme caution around introducing something new in the democratic process. The last big change in the way elections are carried out in the UK was when women were allowed to vote. It's a process that evolves very slowly.
There's an element of fear about new technology
"There's an element of fear about new technology, and security experts who are meant to have a say on this are very conservative," he said.
There is also the problem that, in the public mind, disinformation campaigns, Cambridge Analytica and Russian troll farms are conflated with online voting when, in fact, they are unrelated. The public also tends to believe the current system is more watertight than it actually is.
"No form of voting is 100 per cent secure. You always have trade-offs. Normally the arguments against internet voting consider remote attackers that do not have a physical presence in a polling station, and most hackers cannot do much against you voting in a polling station, but of course we know that every time there's an election there are some irregularities, people trying to abuse the system, inconsistencies in counting. As a margin of error if you do manual counting there can be one to two per cent inconsistency in the UK. That is a significant number."
So would the risks of manipulation and margins of error be lower with online voting?
"It depends on how you design the system," said Galindo. "We have a lot of knowledge today. Switzerland has been doing this, and Estonia and Australia on a regular basis with no problems so far. So we look at how they do it and then start from there."
To beat this one-to-two per cent inconsistency, an online voting system needs to be as secure as possible and also capable of taking account of malicious actors within the system. The system designed by Scytl uses zero-knowledge proofs, which prove that a computation has occurred without revealing anything about that computation. In the case of electronic voting, votes are encrypted to ensure privacy, with zero-knowledge proofs used in the decryption process when the vote is counted.
"By respecting the privacy of the vote you can open ‘the envelope' and do the counting without revealing anything private, but at the same time you can prove that all these computations were done properly," Galindo explained.
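The zero-knowledge idea can be illustrated with the classic Schnorr proof of knowledge, a standard building block in e-voting cryptography. This is a toy sketch, not Scytl's actual protocol, and it uses a deliberately tiny group for readability; real systems use groups of 256 bits or more.

```python
import hashlib
import secrets

# Toy group parameters (illustrative only): p = 467 is a safe prime,
# q = 233 is the subgroup order, and g = 4 generates that subgroup.
P, Q, G = 467, 233, 4

def prove(x):
    """Prove knowledge of a secret x with public key y = G^x mod P,
    revealing nothing about x itself (Fiat-Shamir, non-interactive)."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q)
    t = pow(G, r, P)  # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % Q
    s = (r + c * x) % Q  # response binds r, c and x together
    return y, t, s

def verify(y, t, s):
    """Check G^s == t * y^c mod P; passes only if the prover knew x."""
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, t, s = prove(42)
assert verify(y, t, s)                 # an honest proof verifies
assert not verify(y, t, (s + 1) % Q)   # a forged response fails
```

The verifier learns that the prover knows the secret exponent, but nothing about its value; the same idea, scaled up, lets election officials prove that decryption and counting were performed correctly without exposing individual votes.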
Additional security is built in by splitting the decryption key and sharing it between a number of trusted parties within the electoral authorities.
"You split the control of a very sensitive operation between a number of parties - say seven parties - and you need, let's say, three or four of them to agree - a quorum - in order for the sensitive operation to take place."
The unlocking of the votes using the cryptographic key takes place at a ceremony with those four officials present in much the same way that the results of ballot paper counts are unveiled. While it would be possible (although difficult) for four malicious representatives to reverse-engineer the votes to see how someone voted, the integrity of the vote would be unaffected. So the system is not perfectly ‘trustless', but then neither are the alternatives, said Galindo.
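The key-splitting Galindo describes is typically done with a threshold scheme such as Shamir's secret sharing: the key becomes the constant term of a random polynomial, each trustee holds one point on it, and any quorum can reconstruct it. A minimal illustrative sketch (not Scytl's implementation) with seven shares and a quorum of four:

```python
import random

PRIME = 2**127 - 1  # field large enough for a demo key

def split_secret(secret, shares=7, threshold=4):
    """Split `secret` into `shares` points on a random polynomial of
    degree threshold-1; any `threshold` points recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        points.append((x, y))
    return points

def recover_secret(points):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = split_secret(key)
assert recover_secret(shares[:4]) == key   # any quorum of four works
assert recover_secret(shares[3:]) == key
assert recover_secret(shares[:3]) != key   # three shares are not enough
```

Any four of the seven shares reconstruct the key, while three or fewer reveal nothing about it, which is what makes the ceremony with a quorum of officials meaningful.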
"In the end, you need to place some trust assumptions in any system - even in the current system. In particular, you trust that the electoral commission and the polling stations are behaving and that only eligible voters can vote and that no one is tampering with ballot boxes. If you are a voter, you don't know this."
Provided the software is open source and open to scrutiny, and so long as it is implemented properly, online voting should be more transparent and trustworthy, while bringing elections into the modern age and encouraging the young and the excluded to make their voices heard, Galindo said.
Facebook, Instagram and WhatsApp taken down for 14 hours by suspected BGP leak from European ISP
By Graeme Burton | News | 14 March 2019
Longest downtime in Facebook's history caused by BGP leak, according to Netscout
Facebook, Instagram and WhatsApp were taken down for 14 hours yesterday in a global outage caused by a Border Gateway Protocol (BGP) leak from a European ISP, according to network management specialist NetScout.
The downtime was the longest in Facebook's 15-year history, with the company issuing a denial addressing widespread rumours that it had been targeted and taken down by a distributed denial-of-service attack (DDoS).
The services went down at around 4pm UK time on Wednesday and were only brought back in the early hours of Thursday morning. During the outage, users were still able to view Facebook but couldn't post anything.
"While not malicious in nature, such events can prove disruptive on a widespread basis. It is very important that all network operators implement BGP peering best current practices," said NetScout principal engineer Roland Dobbins.
"We saw '500 internal server errors' from Facebook. Given the sheer scale and continuous changes that these web scale providers are constantly making to their applications and infrastructure, sometimes things break as a result of these changes, even in the most capable hands," the company claimed in a statement.
It continued: "We're not seeing any BGP changes that are affecting connectivity, packet loss or latency. Since Facebook uses its own backbone network, it's not clear [and] we don't have insight as to how an external transit route issue would cause a disruption within the internal Facebook network."
For Facebook, the outage could have a significant impact on revenues, with the company losing tens of millions of dollars in advertising revenues.
"The internet celebrated its 30th anniversary this week. The whole objective was that the internet was a self-healing web - i.e. one bit goes down but the system mends itself and keeps functioning. But if users just rely on a few sites, we all become very vulnerable," commented TechMarketView's Richard Holway.
He continued: "At least Facebook going down just meant fewer cat videos were posted. But if WeChat went down in China, many people's everyday lives would be affected as they couldn't make payments or order stuff, either. A bit like Facebook, Google, Amazon and all the online banking systems going down at the same time."
BGP has been at the centre of a number of internet traffic routing incidents in recent years, with claims that authorities in China and Russia have hijacked traffic several times for reasons as-yet unknown.
NHS Trust launches Robotic Process Automation Centre of Excellence
By Stuart Sumner | News | 14 March 2019
Chelsea and Westminster NHS Foundation Trust CFO Sandra Easton said that RPA at the Trust could provide a blueprint for a broader NHS rollout
Chelsea and Westminster Hospital NHS Foundation Trust, which employs around 6,000 staff caring for nearly one million people, is launching a Centre of Excellence (CoE) for robotic process automation (RPA) technology that is being rolled out across the Trust's corporate services section.
While the long-term vision is to provide a pathway to wider RPA adoption within the NHS, Sandra Easton, chief financial officer at the Trust, explained that usage of the technology is currently at a very low level within her organisation.
"Until now our use of RPA has been very minimal," said Easton. "As a sector, the NHS is very risk averse because everything we do is evidence based, there's science behind it.
"But we've seen lots of things happening in RPA in other industries. I had an opportunity to look at a finance department in a different sector a year ago, and I saw how useful RPA can be.
I've got a finance accountant who spends three to four days per month doing VAT returns, and correcting errors. We've now automated a third of that work
"It needs to come into the NHS in a safe environment in the back office. We need to prove that it works, and it's safe, and give ourselves the opportunity to set up governance and structure around how we use and implement it. Once we've got evidence and proof we can take it more towards patient-facing parts of our service," she added.
A unified approach to RPA
While there are some small pockets of RPA usage across the 200 other NHS Trusts in the UK, there is no joined up or systematic approach to its implementation. Easton acknowledged that the NHS encourages its various bodies to run their organisations in their own way to an extent, but argued that its RPA strategy should be more coherent.
"The NHS is one big family, but we allow a thousand flowers to bloom, as every hospital and patient is different. But with RPA, it makes sense to have one approach in the NHS.
"So that's why we're setting up a showcase site to demonstrate how to use this technology in our sector, and we're developing the skills and knowledge in the CoE that allow us to use it as a coherent approach for the rest of the NHS."
Next we want to start to move the technology more towards patients. At the moment we have huge teams whose sole job is to schedule patients for appointments or operations
The Trust is kicking off its own RPA implementations in three back-office teams: finance and procurement (which counts as one team), HR, and informatics.
"We're getting people trained up in all of those teams, and giving them free rein to solve problems they need to solve, they know what's ripe for automation. We're seeing some really big impact processes automated initially.
"One of the early ones is our VAT reporting. VAT in the NHS is very complex. I've got a finance accountant who spends three to four days per month doing VAT returns and correcting errors. We've now automated a third of that work, which turned out to be very quick and easy to do. That's a day a month released from that person's workload," added Easton.
Releasing time, not people
Easton explained that the aim for every RPA project within her organisation is to release people's time in order to create more value to the business. In other words, it's not about looking to reduce staff numbers.
"At the moment we're saying you can use RPA to add more value to the business. For the people we're training in the CoE, that's a day each month that we've released, which will be invested into creating new bots in other areas.
"We also have finance business partners, and their role is to support our decision making. At the moment they have to spend time correcting data errors and uploading material. These are manual tasks, and if we automate that, that's more support we can give to the business."
James Dening, VP and European RPA evangelist at Automation Anywhere, the Trust's partner for the creation of the Centre of Excellence, explained that RPA isn't often used as a way to reduce headcount.
"Typically we're not seeing our customers making redundancies. Instead they look to the bits of people's jobs that are repetitive, so we're taking the robot out of the human. Take away that dull stuff people do, and give them back time to find other ways to bring value to the business. That's a morale booster, it increases employee satisfaction."
The Trust decided to use Automation Anywhere to help create the CoE after putting an open tender out to the industry.
"We took a different approach to finding a partner. We made a call for innovation, and put that out to the market. We said we want to get into this space, we know there's technology to help us solve it, come to us with solutions. We had 21 organisations respond asking to work with us. Of those, we asked six to come in and talk to us, and Automation Anywhere was one of those.
"One thing which struck us was how well they aligned to our values."
The future for RPA in the NHS
Easton continued, explaining what she wants from RPA over the next two years: "I would hope for all of the implementations we started with RPA to be completed. Next we want to start to move the technology more towards patients. At the moment we have huge teams whose sole job is to schedule patients for appointments or operations.
"This technology could make their lives so much easier. And the technology will advance over the next one to two years, and we'll be doing things we haven't even thought of yet."
Given that this is planned to be an example of how RPA could work in the broader NHS, how will Easton get the word out among such a huge, diverse and dispersed organisation?
We're adding cognitive capabilities onto the automation functionality. That means machine learning capabilities, document classification and natural language processing
"We've got 200 independent organisations in the NHS. The approach is very much making sure there's a high level of visibility around what I'm doing. There are a number of ways I'm doing that, including presenting to various NHS groups I'm part of. I'm also working with Automation Anywhere to get them going to other Trusts and showcasing what we're doing."
It's common for RPA projects to be run by the IT department, which is happening at Leeds Building Society. Easton explained that things are different in her organisation, principally because of a lack of capacity within her IT department.
"My IT team and our CIO are rolling out new Electronic Patient Records. That is a huge task, so they don't have capacity to do this. Also the passion and drive for RPA is in my team. The technology interface has just been about finding a server to put RPA solutions on, that's the only IT involvement."
The future for RPA
Automation Anywhere's Dening said that the future of their RPA product is to add cognitive capabilities.
"We're adding cognitive capabilities onto the automation functionality. That means machine learning capabilities, document classification and natural language processing, to name just some of the new functions.
"The aim is to give organisations ways to deal with the enormous amounts of unstructured and semi-structured data they have. We want to let customers scale and grow their automation deployments, and achieve even bigger returns on investment.
"The second big area is our bot store. That's a way of letting organisations share their RPA tools. Think about what really kick-started smartphones: it was their app stores. It's the same for automation.
"Nothing's new under the sun, and someone else will have a similar environment to you. So you expose that technology to other organisations, which supercharges automation for certain companies.
"What Sandra's doing here, she can take the IP she's building and expose it to other trusts," Dening concluded.
Google developer Emma Iwao has claimed a Guinness World Record for calculating pi to 31.4 trillion digits using Google Cloud
Pi, the ratio of the circumference of a circle to its diameter, which starts as 3.1415 and just keeps going, has many applications. Engineers use it to measure the circular velocity of things like wheels, motor shafts, engine parts and gears. It can measure things like ocean, light and sound waves, river bends and radioactive particle distribution. NASA even uses it to calculate space flights.
Most of these applications don't need pi to more than a few hundred digits, but that hasn't stopped mathematicians from continually setting and breaking records using calculators specifically built for the task.
For the first time ever, Google senior developer advocate Emma Iwao has turned away from physical machines and used the cloud to calculate pi - to more than 31.4 trillion digits (31,415,926,535,897, to be exact), which we can't reproduce here because bits would start falling off the Computing site.
Iwao set a new Guinness World Record with her work, which used y-cruncher (a Pi-benchmark program developed by Alexander Yee) on a Google Compute Engine cluster of 25 virtual machines.
She turned to the cloud because of the demands of the Chudnovsky formula, a common algorithm for computing pi. The time and resources necessary to calculate additional digits increase more rapidly than the digits themselves, and it gets harder to survive a potential hardware outage or failure as the computation goes on.
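For the curious, the series itself is short enough to sketch. The following Python snippet sums the Chudnovsky series using the standard-library decimal module; y-cruncher's real implementation uses far more sophisticated techniques such as binary splitting, but the underlying formula is the same, with each term contributing roughly 14 extra digits:

```python
from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_pi(digits):
    """Approximate pi with the Chudnovsky series:
    426880 * sqrt(10005) / sum_k (-1)^k (6k)! (13591409 + 545140134k)
                                   / ((3k)! (k!)^3 * 640320^(3k))."""
    getcontext().prec = digits + 10       # guard digits for rounding
    terms = digits // 14 + 2              # ~14.18 digits per term
    total = Decimal(0)
    for k in range(terms):
        num = Decimal((-1) ** k) * factorial(6 * k) * (13591409 + 545140134 * k)
        den = (Decimal(factorial(3 * k)) * factorial(k) ** 3
               * Decimal(640320) ** (3 * k))
        total += num / den
    return Decimal(426880) * Decimal(10005).sqrt() / total

print(str(chudnovsky_pi(50))[:52])  # pi to 50 decimal places
```

This naive version recomputes factorials at every term, which is exactly why serious record attempts need the scalable compute and fault tolerance Iwao describes.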
The cloud, says Iwao, solved both problems. It has scalable and powerful compute, and a live migration feature to keep infrastructure up to date. Over the course of the four months (121 days) it took to finish the calculations, Google Cloud performed ‘thousands' of these migrations.
The biggest challenge in calculating pi to such extreme lengths is the massive storage and memory requirements. Iwao's work required 170TB of data to complete - about the same as is housed in all of the print collections of the US Library of Congress, or about 2,833.33 (repeating, of course) copies of World of Warcraft.
One of the uses of these gigantic calculations of pi is testing supercomputers. "When I was a kid, I didn't have access to supercomputers. But even if you don't work for Google, you can apply for various scholarships and programs to access computing resources," said Iwao. "I was very fortunate that there were Japanese world record holders that I could relate to. I'm really happy to be one of the few women in computer science holding the record, and I hope I can show more people who want to work in the industry what's possible."
Astronomers discover 83 supermassive black holes 13 billion light-years from Earth
By Dev Kundaliya | News | 14 March 2019
Supermassive black holes came into existence when the Universe was just five per cent of its current age
Astronomers from Taiwan, Japan, and Princeton University claim to have discovered 83 previously unknown quasars, which are powered by supermassive black holes and are located around 13 billion light-years away from us.
According to the research team, these supermassive black holes came into existence when the Universe was just five per cent of its current age. The age of the Universe is estimated at around 13.8 billion years old.
A black hole is an area in space where a massive amount of matter is packed into a very small area. Scientists believe black holes are formed when large stars collapse at the end of their life cycle. After a black hole is formed, it can continue to grow by consuming mass from its surroundings. Supermassive black holes are found at the centres of galaxies and are millions of times more massive than the Sun.
Quasars are massive objects in the Universe that emit exceptionally large amounts of energy. Quasars are thought to contain massive black holes.
In the current study, astronomers used three different telescopes to search for quasars in the distant Universe. Specifically, they used data collected by Hyper Suprime-Cam (HSC) instrument installed on the Subaru Telescope in Japan. Subaru is considered one of the largest and most powerful telescopes in the world.
The data collected through HSC helped astronomers to select distant quasar candidates for further investigation. The team then obtained spectra of those candidates using three telescopes: the Subaru Telescope, the Gran Telescopio Canarias in Spain and the Gemini South Telescope in Chile.
The detailed survey enabled the team to discover 83 quasars that were previously unknown to scientists; a further 17 quasars had already been discovered in the same survey region.
The most distant quasar that astronomers discovered in the new survey is located 13.05 billion light-years away.
According to scientists, the light emitted by an object 13 billion light-years away takes 13 billion years to reach Earth. It therefore shows how such objects looked 13 billion years ago, when the Universe was just five per cent of its current age.
After a detailed analysis of the survey data, the astronomers also concluded that there is roughly one supermassive black hole per cubic giga-light-year of the Universe.
The research team now wants to expand the scope of their study to spot more distant black holes in the Universe in an effort to find out when the first supermassive black hole was formed in the Universe.
Huawei can't be trusted for building 5G mobile networks, German Intelligence tells federal panel
By Dev Kundaliya | News | 14 March 2019
German intelligence officials believe that Huawei cooperates with its national secret service
Germany's foreign intelligence service believes that Chinese telecoms equipment maker Huawei isn't a reliable partner and should not be allowed to participate in building the country's upcoming 5G mobile networks.
The damning assessment by officials within German intelligence comes despite the government's rebuff of US demands that Huawei be kept out of the country's 5G networks, or risk losing access to critical US intelligence.
According to Bloomberg, a senior official of the country's intelligence service told a federal panel on Wednesday that several past security-related incidents lead German agencies to distrust the Chinese company.
Speaking at the same event, another foreign affairs ministry official said that Huawei likely cooperates with its national secret service.
"It's above all a matter of trustworthiness," the foreign affairs representative told the panel.
Earlier this week, the Wall Street Journal reported that the US administration had warned the German government to ban Huawei from its upcoming 5G mobile network projects or face the risk of being cut off from intelligence-sharing.
Huawei is currently facing intense scrutiny in several countries over allegations that Chinese agencies could use its equipment for spying. The US has been pressing allies not to allow use of Huawei's equipment in their 5G networks.
But Huawei has repeatedly denied the allegations, stating that all its equipment is safe to use and that the company does not share any data with Chinese intelligence agencies.
In Germany, auctioning of 5G licences will start next week and, according to Bloomberg, the country's intelligence agencies have been advising the government not to allow Huawei in 5G infrastructure projects.
Earlier this month, Germany's Federal Network Agency (BNetzA) outlined new security requirements for telecommunication companies, which are expected to appear in draft form in the coming months and will apply to all telecom networks, not just 5G.
The first of BNetzA's requirements demands that telecom companies procure systems only from "trustworthy suppliers" whose compliance with national security regulations is assured. These suppliers must also have provisions in place for the secrecy of telecommunications and for data protection.
Under the planned requirements, telecoms firms must constantly monitor network traffic for abnormal events and take appropriate measures whenever there is cause for concern.
The companies must also use only components that have been certified by the Federal Office for Information Security.
If the planned requirements are approved by the government, German firms will be obliged to avoid relying on a single vendor, and to allow only "trained professionals" to work in security-related areas.
'Low' risk that Swiss online voting bug could have been exploited, says the system's developer
By John Leonard | News | 13 March 2019
Bug bounty did its job, says David Galindo
David Galindo was one of the senior developers of the ‘zero-knowledge' protocols used to secure the Swiss online voting system already used in several Cantons as well as in Australia. Zero-knowledge proofs ensure that each vote is counted accurately and verifiably without revealing the identity of the voter.
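The commit-challenge-response structure underlying such proofs can be illustrated with the classic Schnorr identification protocol, a simple zero-knowledge proof of knowledge of a discrete logarithm. This is a sketch only: the group parameters below are toy values, and the actual Swiss system uses far more elaborate proofs.

```python
import secrets

# Toy group: p = 23 is prime and g = 2 generates a subgroup of prime
# order q = 11 (since 2**11 % 23 == 1). Real systems use groups with
# ~256-bit order; these values are purely illustrative.
p, q, g = 23, 11, 2

def commit():
    """Prover: pick fresh randomness r, send commitment t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def respond(x, r, c):
    """Prover: answer the verifier's random challenge c using secret x."""
    return (r + c * x) % q

def verify(y, t, c, s):
    """Verifier: accept iff g^s == t * y^c (mod p).
    This holds because g^(r + c*x) = g^r * (g^x)^c."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One round: the prover convinces the verifier it knows x such that
# y = g^x mod p, without revealing x itself.
x = 7                       # prover's secret
y = pow(g, x, p)            # public value
r, t = commit()             # 1. prover commits
c = secrets.randbelow(q)    # 2. verifier issues a random challenge
s = respond(x, r, c)        # 3. prover responds
assert verify(y, t, c, s)   # 4. verifier checks the proof
```

A round reveals nothing about x, because a transcript (t, c, s) can be simulated without knowing x; voting protocols of the kind described here build on the same structure, typically made non-interactive.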
Currently a senior lecturer in computer security at the University of Birmingham, Galindo no longer works with the system's developer Scytl, but he remains a strong advocate of electronic voting as a way of enabling disenfranchised citizens to make their voices heard. Computing interviewed him recently about this and other issues, and we'll be publishing that article shortly. In the meantime, we asked him to comment on the story, specifically the researchers' finding that the software reported a correct computation even after the vote-counting process had been interfered with.
Galindo maintains there is no cause for alarm. The code, specifications and all the documentation are in the public domain so that it can be tested before launch, he pointed out, adding that a bug bounty was put in place precisely in order that flaws can be rooted out before October.
The issue relates to the implementation of the system rather than its design and so is relatively easy to fix, he said, adding that it would be very hard for an attacker to exploit in a real-world scenario.
"Given the source code was intentionally released to the technical community for bug spotting I welcome the identification of this serious implementation flaw, however it should also be noted the ability for this vulnerability to have been exploited in practice is certainly low. More importantly, it has now been swiftly resolved and the core cryptographic design remains robust and secure."
Galindo added that the flaw was a known type of vulnerability in the implementation of zero-knowledge proofs, and that a similar error had been previously spotted and fixed in the cryptocurrency Zcash.
"The SwissVote and Zcash cases remind us of the fact that no computer system is free from bugs. What is important is to follow industry best practices, such as validating your design through third-party certifiers and auditors, being as transparent as possible, and creating bounty programmes inviting the public to test your systems," said Galindo.
"This is precisely what has been done in this case by Swiss Post/Scytl in a probably unprecedented move. We should celebrate the fact that this will ultimately improve the security of our voting systems."
Our interview with David Galindo on e-voting and zero-knowledge proofs will be published shortly.