Blog

This is the blog of the technical experts at GEN and its companies.

Brexit and Your Data


I was onboarding a new customer today, a B2C company in Nottingham, and I mentioned that their website and email were hosted and stored in the USA; since their customers create accounts on their website, it might be wise to consider hosting that data within the UK. The customer had no idea this was the case, or indeed of the implications. They had been sold a "make it yourself" website from a well known provider who threw in email for a dollar a decade, but they never considered where the actual data was being stored, or from where the website was being served. In principle there's no problem hosting data in the USA, but for B2C businesses within the UK (and Europe) the General Data Protection Regulation imposes certain requirements on those businesses to protect customer data.

Since the introduction of GDPR, end users have a right to expect that their personal information is stored and protected within the framework of the legislation. That is, reasonable steps have been taken to protect it from undue exposure, and rights of access, review and removal are granted. The UK regulator (the ICO) has the power to compel companies based in the UK and wider European Union to comply with this legislation or face severe penalties. A company outside of the UK/EU has no such obligations, nor can it practically be compelled to do anything by the regulator.

This is where it all gets very muddy indeed. As a consumer within the UK you have rights under GDPR, but only in respect of companies operating within the UK. That is, if you purchase something from Fluffy Chicken Limited, they will receive and process your personal data such as name, address, phone number, card numbers and so on, and you would rightfully expect the processing of your data to be covered by GDPR. However, if Fluffy Chicken Limited's online shop is hosted in the USA, your data is stored in the USA and the order is fulfilled by an American company, then Fluffy Chicken Limited does not process your data at all, effectively removing any protection you may think you have under GDPR. Even if Fluffy Chicken Limited did process your data, they would have received it from an American company, not from you.

When we leave the European Union in a couple of months' time, data hosted in Europe may no longer be in scope for GDPR. In all probability, GDPR will only apply to companies within the UK and data stored within the UK.

The thin line between storage, processing and regulation

I think everyone's aware that a company within the UK that services consumers has to comply with GDPR, which makes total sense, but where do the obligations end?

Assume a UK company pays for an online shop service provided by an American company: customers visit that company's website and place orders, and those orders are then transmitted to the UK company. Is the American company under any obligation under GDPR? No. In fact the UK company is only responsible for GDPR compliance in respect of the personal information it receives from the American company. Should the American company also provide order fulfilment (as is the case with Amazon), then the UK company would arguably have no responsibility for data protection under GDPR.

In our experience, customers are on the fence on this one. Some of our customers have deliberately migrated data and fulfilment offshore, whereas others have chosen a blended solution.

Brexit

We've been involved in several large scale migrations from the UK to somewhere in the EU ahead of the Brexit deadline. Companies who operate throughout the EU have to decide whether being based in the UK is the right choice, and for many it's not. There could be arbitrary tariffs, customs regulation or other restrictions imposed.

For our own supply chain, with its heavy dependence on Hewlett Packard, we're going to experience stock shortages and shipping issues, since most of our spare parts come from the EU.

For anyone with a .eu domain name: unless you have offices in the EU, you will lose it at the end of this year.


FreePBX as a route to intrusion and data breach


The History

FreePBX has been around for well over a decade; it was one of the three popular Asterisk GUIs and our choice for years. Asterisk itself has been around even longer, and we've been providing and supporting it since version 1.6. FreePBX has been constantly developed and enhanced with functions and features, providing a framework to build an Asterisk dial plan configuration through a nice GUI interface. FreePBX also provides include files that can be leveraged to add custom dial plans whilst maintaining general management via the GUI.

Up until 2015, FreePBX was in a constant development cycle providing regular updates, fixes and features, primarily from Schmooze, a Wisconsin-based developer who provided very good commercial support and some commercial modules to monetise the operation. These commercial modules could (mostly) be purchased with a 25 year licence, and for many this was a great way to get some excellent commercial features for a sensible charge.

In January 2015, Sangoma acquired Schmooze, and from that point onwards development slowed, updates slowed and commercially licensed modules stopped updating. Today, development on FreePBX seems to have slowed significantly, and even the blog postings on freepbx.org have stopped. Sangoma are still selling the same modules and support, but I'd hazard a guess that their commercial Switchvox product is taking priority, which is a shame.

The Breach

Back to the title. We were investigating a data breach at a company (not an existing customer), working backwards from the epicentre, their MySQL server, towards the source. We had already identified that the MySQL server login had been 'discovered' and leveraged to select data from a range of tables, and that the queries originated from the FreePBX box. We removed the FreePBX box, imaged it and then returned it. Analysing the image, we could see activity under the root login using the mysql -u command to access the company's remote MySQL server.

I won't bore you with the nitty gritty of the FreePBX box compromise, but let's just say it was running PHP 5 on CentOS 6.5, as the majority will be, simply because the upgrade path is fraught with issues. The backup/restore function does most of the job, but there will almost always be some manual correction, or even a fresh install and reconfigure, which dissuades operators from this path unless it's absolutely critical. Combine this with the relatively complex setup of NATted SIP/RTP, and it promotes the bad practice of putting FreePBX on the dirty side of the firewall, which this was.

Once the FreePBX box was compromised, there were numerous opportunities to pillage the configuration for upstream SIP credentials (stored in the clear), as well as extension and voicemail passwords, voicemails and other data. The hacker had created an inbound route on the switch directing a DDI call to a DISA endpoint, allowing them complete system access. There was also evidence of numerous reconfigurations of inbound routes for unknown reasons. I fully expected the hacker to create or compromise an extension, pretend to be 'IT' and then leverage credentials out of the staff, but instead they simply dumped the Asterisk database and found the MySQL server credentials stored in the clear in the superfectaconfig table. Superfecta is a module that, given a CID, can pull information from various sources (as plugins) and use it to enrich the CallerID information passed through to the endpoint. It's not a bad module, it works, and it's not the author's fault that credentials end up stored in the clear (although there are more secure options such as two-way TLS), but the clear risk here is that any credentials you enter into it can be retrieved fairly easily and exploited.

For this customer, their MariaDB database contained their customers, contacts, quotes, invoices, contracts and pricing, all of which was sold on. The breach came to light when one of their competitors picked up the phone and notified them that they had been offered the data for a very moderate fee, which was a very honest and professional thing to do.

The Risk

VoIP servers are often overlooked by risk managers because they are thought to be 'isolated' from the things that matter, but as we can see in this specific case, a simple CID lookup provided everything needed to compromise the main database server and export it all. Some may comment that the MySQL login should have been restricted to a certain table or view, but in reality that just doesn't happen often in the wild, and even if a DBA had created a view, added a user restricted to the PBX box and granted it SELECT only on that view, you've still given the would-be hacker a valid and operational login to your MariaDB/MySQL database that could be exploited. ANY authenticated connection between server A and server B creates a possible route for compromise, and you should consider carefully the risk and reward of each.

I'm not sure what the future holds for FreePBX in the hands of Sangoma. We could see a community supported fork, much in the way of MariaDB, or Sangoma could re-ignite development and clear some of the 802 open issues; we hope so.

IF YOU ARE RUNNING FreePBX and don't have an active support agreement then get one and ensure...

  • It's running the latest version of FreePBX.
  • It's running on CentOS 7 or later.
  • It's behind a firewall with SIP/IAX NATted, and firewalld is set up and configured.
  • Apache is restricted to the LAN.
  • Do NOT give CallerID Superfecta or CID Lookup credentials to your database server. If you MUST use caller ID lookup, then push a limited table of data to the FreePBX server's own MariaDB database and query it there (see the sketch after this list). This is not hard to do, and once set up it will function just the same as a remote lookup whilst maintaining isolation between the FreePBX box and the company database(s).
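
As a rough illustration of that last point, here is a minimal sketch in Python using the pymysql driver. It runs on a trusted host on a schedule, reading a small number-to-name table from the company database and pushing it to the PBX's local MariaDB; the hostnames, account names and table layout are our own illustrative assumptions, not anything Superfecta defines.

```python
#!/usr/bin/env python3
# Hypothetical sketch: run on a trusted host (NOT the PBX) on a schedule.
# It copies a minimal CID table into the PBX's local MariaDB, so the PBX
# never holds credentials for the company database.
import pymysql

# Read the contacts we need from the company database (trusted side only).
company = pymysql.connect(host="db.internal", user="cid_export",
                          password="********", database="crm")
with company.cursor() as cur:
    cur.execute("SELECT phone_number, display_name FROM contacts")
    rows = cur.fetchall()
company.close()

# Write them to the PBX's own MariaDB using a throwaway, PBX-local account.
pbx = pymysql.connect(host="pbx.internal", user="cid_push",
                      password="********", database="cidlookup")
with pbx.cursor() as cur:
    cur.execute("""CREATE TABLE IF NOT EXISTS contacts (
                       phone_number VARCHAR(20) PRIMARY KEY,
                       display_name VARCHAR(64) NOT NULL)""")
    cur.execute("DELETE FROM contacts")  # full refresh keeps it simple
    cur.executemany("INSERT INTO contacts VALUES (%s, %s)", rows)
pbx.commit()
pbx.close()
```

The CID lookup is then pointed at the local cidlookup database with a SELECT-only account, so a compromise of the PBX yields a phone directory rather than the company database.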

If you want to be really comprehensive in your network security policy, then segregate the FreePBX box between two firewalls, creating a VLAN for it. This way you have SIP, RTP and IAX NATted to the internet and your upstream providers, with specific firewall rules allowing traffic to and from those providers ONLY. The internal LAN firewall will allow only SIP/RTP traffic between the FreePBX box and the network segment with your IP phones on it, and HTTP(S) traffic to the network segment with your users on it. Everything will work just fine, but with some extra hardware (or even just iptables/firewalld) you've reduced the possible paths to compromise to a negligible level. Anything that talks to the outside and allows incoming connections, even if it's just your VoIP server, is a risk that needs to be managed. This isn't just a FreePBX issue; FreePBX is quoted here simply because it was the route to compromise in this case, but any VoIP server has the same risk factors and should be equally considered.

If you found this interesting, comment and/or like. If you need help and advice with your FreePBX server, then use the forums for free community assistance or the HelpDesk for priority support.


Torrent Sites - The History, Mistakes and Failures

The History

Since the invention of the Internet, software piracy has been a staple activity online, and with broadband came media piracy of music, TV and movies. Before BitTorrent, both software and media were shared on download sites (many of which have since been shut down), but this was problematic: site owners quickly disabled links generating high traffic, meaning pirated downloads shifted from site to site and downloaders were forced to search numerous links to find a working one.

The appearance of Napster in 1999 promoted a distributed sharing scheme where files could be stored across many hosts and downloaded in chunks. Napster, now long since gone, spawned a host of lookalike peer-to-peer sharing programs such as Gnutella, BearShare, LimeWire, Kazaa, Grokster and many more.

The appearance of BitTorrent in April 2001, thanks to Bram Cohen who designed the protocol, allowed large files to be shared in a different way. Instead of one site hosting the file and downloaders taking it from there, with BitTorrent the file is split into hundreds (or thousands) of chunks, and those chunks are spread over hundreds (or thousands) of hosts. The concept is that everyone who downloads the file then shares pieces of it with other users, quickly creating a distributed source for the file across many hosts.

BitTorrent, however, had no way within the protocol to 'advertise' a searchable list of available files. This was deliberate and not a flaw in its design. To find files to download, websites sprang up with searchable lists of 'links' that point a BitTorrent client to a tracker (a service that maintains a list of the chunks available on various internet hosts), usually run by the same site, and from that the file can be downloaded. These websites were free to use, with ad revenue providing some income.

Whilst there was a core following of these BitTorrent index sites, the vast majority of internet users were completely unaware they existed. Fast forward a few years: in 2003 ThePirateBay.org was founded by the Swedish organisation Piratbyran. After a year of operation the site indexed 60k torrent files, and by the end of 2005 it would index 2.5M. It must be noted that whilst the site indexed 2.5M torrent files, the availability of those files was significantly lower. Regardless, this file sharing infrastructure did not go unnoticed by the Americans and their media companies, who, through the Motion Picture Association of America (the MPAA herein), began a legal assault on such sites even though they were outside of the USA.

The Mistakes

In 2006, the raid on The Pirate Bay's servers by Swedish police, admittedly after pressure from the USA, hit the global news headlines, and millions of new users were introduced to the world of torrents. The Pirate Bay, after having its servers seized, was down for a full three days before reappearing online with a significantly increased user base thanks to the media coverage. If there was ever a moment in time when BitTorrent became mainstream, this was it.

Having failed miserably to make any significant impact on The Pirate Bay, the MPAA and others began a campaign targeting home users for downloading the content, most of whom were children, but again this proved totally ineffective as well as being a source of major embarrassment.

In the years that followed, the MPAA and associates targeted various torrent indexing websites and pursued the operators until they eventually closed the sites, most notably Suprnova, TorrentSpy, LokiTorrent, BTJunkie, What.cd and Mininova, among many others. The remaining, and probably most popular, sites stay active; at the time of writing these are Demonoid, KickassTorrents and of course The Pirate Bay.

In the next and clueless assault by the American-led coalition of media conglomerates, governments were leveraged to enforce DNS blocks on the domain names of sites like The Pirate Bay. These court orders were then imposed on major ISPs, requiring them to change their systems to block lookups for these domains and instead show a page telling the user why they were blocked. This was not only a clueless waste of everyone's time and money, but it drove torrent indexing sites into a huge array of mirrors, each simply replicating the content of the source. The Pirate Bay and many other similar sites also started hosting anonymous onion sites on the Tor network, which circumvented any blocks and gave visitors anonymity.

This action had zero effect on visitors to their chosen torrent sites, because lists of valid mirrors sprang up to direct traffic; worse, it created a network of mirrors, and mirrors of mirrors, all sharing and distributing torrent magnet links, effectively rendering any further DNS blocking pointless.

In July 2016, Kickass Torrents, which was probably the second most used torrent index site, was shut down, again by actions from the USA, soon to reappear as katcr.co, run by the same people as the original site. Once again the change was widely publicised on the internet, and users' searches were quickly redirected to the new site. The site moved yet again to katcr.to shortly after, and this seems to be its new home. There were of course a stream of court orders forcing ISPs to block this and other Kickass Torrents domains, but an impressive list of mirrors immediately sprang up to carry traffic around the blocks.

The Landscape

Tor, the onion routing project, became established as a non-profit in 2006 as a way to browse the internet anonymously. That is to say, all other ways to browse the internet are certainly not anonymous: your ISP tracks every site you visit, every port you open and of course every file you download. Various government entities further track your activities, both by leveraging ISPs and by sniffing traffic at interconnects, and of course apps and software are equally guilty of rampant privacy violations. Tor provides a simple framework for anonymous browsing by routing your traffic securely to an exit point far away, often in another country. Tor, which can be downloaded for free from www.torproject.org, easily and efficiently circumvents any DNS-level blocking forced on ISPs by court orders, and the sustained growth in Tor users would suggest this is not unknown.

Regardless, at the time of writing The Pirate Bay, Kickass Torrents and their many mirrors are still online and still serving requests over a decade later. Knowing that BitTorrent and its index sites are here to stay, we should consider why they are popular and look at the motivation for piracy and the real-world effect it may have on content creators.

The MPAA and the other interested parties generally allege in court that every movie shared on BitTorrent is a loss to the industry, but to anyone with common sense that's simply untrue. For many, probably the vast majority of downloaders, if BitTorrent wasn't available they simply wouldn't watch the movies, or would wait until they could buy bootleg copies. Given this, the actual loss to the industry is small, and in fact BitTorrent is diminishing the market for copied DVDs.

You can divide BitTorrent users who download copyright material into two distinct groups: those who cannot afford or cannot access the media they are downloading, and those who can afford to buy but choose to download instead.

You will not be surprised to learn that the second group, those who can afford to buy but download instead, is a tiny minority, and this minority is the only group in fact depriving media producers and content creators of their duly deserved revenue, and even then a relatively small amount. So why the chaotic and disproportionate attack on these sites and their operators? The question has yet to be answered, but you could theorise that it's a general lack of understanding of how the internet works and of the actual target, plus organisations desperately trying to appear as though they are doing something, even if it's completely ineffective. A wise man once told me that the only winners in any litigation are the lawyers, and there's probably an element of that here too.

The Recap

Torrent sites like The Pirate Bay and Kickass Torrents don't host any files; they simply catalogue a list of links, and those links point to trackers, which in turn point to the actual files. Trying to close down these sites is now impossible, because the approach thus far has simply fragmented them into thousands of mirrors. Trying to intimidate the downloaders results in nothing but embarrassment. Trying to block sites using DNS is pointless, because they are all accessible via mirrors and of course Tor.

The Solution

The solution is a blindingly simple one: make content available legally and affordably. In the UK, for example, our 'network' TV is poor by any standard, carrying only out-of-date content with endless repeats. Our broadcast TV is even worse. Our Netflix and Amazon Prime are vastly cut-down versions of those available to USA viewers, and networks like HBO, SyFy, ABC, etc. are simply unavailable full stop. So if you want to access current content, what options do you have? BitTorrent satisfies a demand simply because it exists.

In the future, I hope that ALL video content will be available through aggregator networks like Netflix and Amazon Prime, to all countries and people, at monthly rates they can easily afford. Sites and networks that want users to 'buy or rent' a movie or show will slowly die out, as will the use of peer-to-peer for sharing copyright media. You only need to look at Spotify or Amazon Music to see that this model actually works in the real world: listen to any music you like, at any time, for a small monthly cost.

There are doubtless going to be 'groups' who disagree with that assessment, but those 'groups' are also going to be aligned with the organisations who feel they are somehow wronged by BitTorrent and peer-to-peer. The content creators and producers are going to have to re-think the way media is distributed and licensed, instead of desperately trying to hang on to a system that is no longer fit for purpose in the 21st century.

It should be noted that neither the company nor I wish in any way to promote the distribution of copyright material; there are laws in place which make such activity illegal in some countries. This article simply explains, to those with an interest, how and why it occurred, and possible solutions.


The Wreckage

A list of UK court orders to date forcing internet service providers to block websites at the DNS level. Can you imagine the amount of money paid to lawyers to prepare, apply for and execute these orders? And then the cost to independent ISPs of making changes to their systems to allow this to happen?

Date of Court Order: 27/04/2012
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: The Pirate Bay

Date of Court Order: 05/07/2012
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: Newzbin2

Date of Court Order: 28/02/2013
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: KAT or Kickass Torrents websites

Date of Court Order: 28/02/2013
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: H33t

Date of Court Order: 28/02/2013
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: Fenopy

Date of Court Order: 26/04/2013 and 19/07/2013
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: Movie2K and Download4All

Date of Court Order: 01/07/2013
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: EZTV

Date of Court Order: 16/07/2013
Identity of parties who obtained the Order: The Football Association Premier League Limited
Blocked Websites: First Row Sports

Date of Court Order: 08/10/2013
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: Abmp3, BeeMp3, Bomb-Mp3, eMp3World, Filecrop, FilesTube, Mp3Juices, Mp3lemon, Mp3Raid, Mp3skull, New AlbumReleases, Rapidlibrary

Date of Court Order: 08/10/2013
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: 1337x, BitSnoop, ExtraTorrent, Monova, TorrentCrazy, TorrentDownloads, TorrentHound, Torrentreactor, Torrentz

Date of Court Order: 30/10/2013
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: Primewire, Vodly, Watchfreemovies

Date of Court Order: 30/10/2013
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: YIFY-Torrents

Date of Court Order: 30/10/2013
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: Project-Free TV (PFTV)

Date of Court Order: 13/11/2013
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: SolarMovie, Tube+

Date of Court Order: 18/02/2014
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: Viooz website, Megashare website, zMovie website, Watch32 website

Date of Court Order: 04/11/2014
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: Bittorrent.am, BTDigg, Btloft, Bit Torrent Scene, Limetorrents, NowTorrents, Picktorrent, Seedpeer, Torlock, Torrentbit, Torrentdb, Torrentdownload, Torrentexpress, TorrentFunk, Torrentproject, TorrentRoom, Torrents, TorrentUs, Torrentz, Torrentzap, Vitorrent

Date of Court Order: 19/11/2014
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: Watchseries.It, Stream TV, Watchseries-online, Cucirca, Demonoid, Torrent.cd, Vertor, Rar BG, BitSoup, Torrent Bytes, Seventorrents, Torrents.fm, YourBittorrent, Tor Movies, Torrentz.pro, Torrentbutler, IP Torrents, Sumotorrent, Torrent Day, Torrenting, Heroturko, Scene Source, Rapid Moviez, Iwatchonline, Los Movies, Isohunt, Movie25, Watchseries.to, Iwannawatch, Warez BB, Ice Films, Tehparadox

Date of Court Order: 20/11/2014 (expired on 11/11/2018)
Identity of parties who obtained the Order: Cartier International AG, Montblanc-SImplo GmbH, Richemont International S.A.
Blocked Websites: CartierLove2U, IWCWatchTop, ReplicaWatchesIWC, 1iwc, MontBlancPensOnlineUK, MontBlancOutletOnline

Date of Court Order: 5/12/2014 (expired on 05/12/2018)
Identity of parties who obtained the Order: Cartier International AG
Blocked Websites: Pasmoldsolutions, PillarRecruitment     

Date of Court Order: 17/12/2014
Identity of parties who obtained the Order: Members of BPI (British Recorded Music Industry) Limited and of Phonographic Performance Limited
Blocked Websites: Bursalagu, Fullsongs, Mega-Search, Mp3 Monkey, Mp3.li, Mp3Bear, MP3Boo, Mp3Clan, Mp3Olimp, MP3s.pl, Mp3soup, Mp3Truck, Musicaddict, My Free MP3, Plixid, RnBXclusive, STAFA Band

Date of Court Order: 29/4/2015
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: afdah.com, watchonlineseries.eu, g2g.fm, axxomovies.org, popcorntime.io, flixtor.me, popcorntime.se, isoplex.isohunt.to, eztvapi.re, eqwww.image.yt, yts.re, ui.time-popcorn.info

Date of Court Order: 7/5/2015
Identity of parties who obtained the Order: The Football Association Premier League Limited
Blocked Websites: Rojadirecta, LiveTV, Drakulastream

Date of Court Order: 21/5/2015
Identity of parties who obtained the Order: Members of The Publishers Association
Blocked Websites: Avaxhm, Ebookee, Freebookspot, Freshwap, Libgen, Bookfi, Bookre

Date of Court Order: 25/2/2016 (expired 31/01/2019)
Identity of parties who obtained the Order: Cartier International AG and Montblanc-SImplo GmbH
Blocked Websites: Perfect Watches, Purse Valley, Montblanc Ebay, Montblanc.com.co, Replica Watches Store

Date of Court Order: 5/5/2016
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: Couchtuner, MerDB, Putlocker, Putlocker Plus, Rainierland, Vidics, Watchfree, Xmovies8

Date of Court Order: 14/10/2016
Identity of parties who obtained the Order: Members of the MPA (Motion Picture Association of America Inc)
Blocked Websites: 123Movies, GeekTV, GenVideos, GoWatchSeries, HDMovie14, HDMoviesWatch, TheMovie4U, MovieSub, MovieTubeNow, Series-Cravings, SpaceMov, StreamAllThis, WatchMovie

Date of Court Order: 08/03/2017 - (expired on 22/05/2017)
Identity of parties who obtained the Order: The Football Association Premier League Limited (“FAPL”)
What is blocked by the Order: Various Target Servers notified to Virgin Media by FAPL or its appointed agent from the date of the Order for the duration of the FAPL 2016/2017 competition season

Date of Court Order: 25/07/2017 (expired 13/05/2018)
Identity of parties who obtained the Order: The Football Association Premier League Limited (“FAPL”)
What is blocked by the Order: Various Target Servers notified to Virgin Media by FAPL or its appointed agent for the duration of the FAPL 2017/2018 competition season.

Date of Court Order: 20/11/2017
Identity of parties who obtained the Order: Twentieth Century Fox Film Corporation, Universal City Studios Productions LLP, Warner Bros. Entertainment Inc., Paramount Pictures Corporation, Disney Enterprises, Inc., Columbia Pictures Industries, Inc.
What is blocked by the Order: Couchtuner.fr, Couchtuner.video, Fmovies, MyWatchSeries.ac, SockShare, WatchEpisodeSeries.com, WatchSeries.do, WatchSeries-Online.pl, YesMovies, Yify-Torrent

Date of Court Order: 21/12/2017 (expired 26/05/2018)
Identity of parties who obtained the Order: Union des Associations Européennes de Football (“UEFA”).
What is blocked by the Order: Various Target Servers notified to Virgin Media by UEFA or its appointed agent from the date of the Order for the duration of the UEFA 2017/2018 competition season.

Date of Court Order: 18/07/2018 (expired 13/05/2019)
Identity of parties who obtained the Order: The Football Association Premier League Limited (“FAPL”)
What is blocked by the Order: Various Target Servers notified to Virgin Media by FAPL or its appointed agent for the duration of the FAPL 2018/2019 competition season

Date of Court Order: 24/07/2018 (expired 12/07/2019)
Identity of parties who obtained the Order: Union des Associations Européennes de Football (“UEFA”)
What is blocked by the Order: Various Target Servers notified to Virgin Media by UEFA or its appointed agent for the duration of the UEFA 2018/2019 competition season

Date of Court Order: 20/09/2018
Identity of parties who obtained the Order: MATCHROOM BOXING LIMITED, MATCHROOM SPORT LIMITED
What is blocked by the Order: Various Target Servers notified to Virgin Media by Matchroom or its appointed agent up to and including 1 October 2020.

Date of Court Order: 28/11/2018
Identity of parties who obtained the Order: QUEENSBERRY PROMOTIONS LIMITED
What is blocked by the Order: Various Target Servers notified to Virgin Media by Queensberry Promotions Ltd or its appointed agent up to and including 1 December 2020.

Date of Court Order: 15/07/2019
Identity of parties who obtained the Order: The Football Association Premier League Limited (“FAPL”)
What is blocked by the Order: Various Target Servers notified to Virgin Media by FAPL or its appointed agent for the duration of the FAPL 2019/2020 competition season

Date of Court Order: 16/07/2019
Identity of parties who obtained the Order: Union des Associations Européennes de Football (“UEFA”)
What is blocked by the Order: Various Target Servers notified to Virgin Media by UEFA or its appointed agent for the duration of the UEFA 2019/2020 & 2020/2021 competition seasons


Why not having a real switch matters

Many people believe that network traffic is just like water down a pipe: it's all data and it can only go back and forth. Actually, that's nowhere near reality; in fact every packet of information that traverses your network consists of at least two layers, that is, a packet within a packet, with most data being three layers: a packet within a packet within a packet. The reason for this is the separation of protocol and transport. To clarify a little, the transport is most likely "Ethernet" and the protocol is most likely "TCP/IP", but it doesn't have to be. When you purchase network equipment you'll be buying an "Ethernet card" for your workstation, an "Ethernet switch" for the network and so on, so it's clear that the transport is Ethernet, but even here there are differentiations.

The standard for Ethernet

The current standard is IEEE 802.3, within which we have a selection of physical connectivity options with different cable types and wiring requirements. Let's look at a few common ones.

10Base5 was the first real "Ethernet" implementation, over thick RG-8 co-ax cable, and commonly known as "Thicknet"; this transport was the standard for DEC and other mini/mainframe systems from the early days. Thicknet required an external transceiver for every connection, converting the RG-8 co-ax to a D-type AUI connector that then connected to the computer or equipment.

10Base2 was where "thin" Ethernet first came to the market. Using the much thinner and easier to work with RG-58 co-ax cable, Thinnet quickly became the standard for local area networks back in the 90's. There were no 'switches', only 'hubs', sometimes called concentrators, which were just dumb connections between runs of co-ax. Towards the end of the life of Thinnet, a few manufacturers such as 3Com did release more intelligent routing equipment, but with 10BaseT on the horizon uptake was limited.

10BaseT was the first structured cabling specification, providing a maximum of 10Mbps over twisted-pair cable, and introduced the Category 3 specification (often just called CAT3). In the past, 10Base2 and 10Base5 consisted of a long run of co-ax with computers hooked into that one long cable. When something on the network broke, everything broke, and fault-finding a problem on a long run strung through several offices was a real pain in the backside. Structured cabling did away with that: instead, every computer had its own cable back to the hub or concentrator, which were now a little smarter. Fault-finding was now as simple as unplugging each computer until everything worked again. Much better.

100BaseT(X) was an upgrade to 100Mbps over CAT5, but it also ushered in the 'network switch', which was like a hub but actually had some intelligence. In the world of the hub, each packet arriving on port 1 was sent out on port 2, port 3, port 4, port 5... port 24, and the computers on those ports simply ignored packets that weren't for them. The switch did away with all that nonsense: instead it watched packets on its ports and built up a table of the devices on each, often called the MAC (or CAM) table. Using this table, the switch could send data ONLY to the port that hosted the destination computer. This magic also greatly reduced the traffic on other segments (connections between switches), so the whole network was significantly more efficient. The more expensive, so-called "managed" switches allowed network engineers to log in and see traffic statistics, errors, activity, etc., which greatly reduced the need for engineers to be standing in front of equipment in order to monitor its operation.
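
The learning behaviour is simple enough to sketch in a few lines of Python. This is a toy model of the logic, not how switch silicon actually implements it:

```python
# Toy model of a learning switch: learn the source port of every frame,
# forward to a known port where possible, otherwise flood.
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn where src lives
        if dst_mac in self.mac_table:              # known destination:
            return [self.mac_table[dst_mac]]       #   send to one port only
        return [p for p in range(self.num_ports)   # unknown destination:
                if p != in_port]                   #   flood all other ports

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # [1, 2, 3] - floods, learns aa:aa on 0
print(sw.receive(1, "bb:bb", "aa:aa"))  # [0] - bb:bb now known, no flooding
```

A hub, by contrast, is the degenerate case that always floods.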

100BaseFX, sometimes called 100Base-X, carries the same traffic but over fibre instead of copper wires.

1000BaseT(X) is the Gigabit Ethernet standard, with 1Gbps speed and full duplex. This is probably the most common implementation in use today and requires CAT5e or later.

Other Standards

The 802.5 specification introduced "Token Ring", a 16Mbps network that operated as a long loop, where each computer relays data not destined for it. Token Ring was big with IBM and at the time was probably one of the more reliable infrastructures, but it came with a stiff price tag.

The 802.11 specification covers wireless transmission and has a number of sub-sections such as a, b, g, n and ac. The physical requirements of sending data over the air are different to those of wired networks; incidentally, the original "Ethernet" took its name from the 'ether', the shared broadcast medium that inspired its design. That aside, 802.11 is a complex and evolving specification, providing ever-increasing transmission speeds and distances.

Back to the Data

So now we know that different physical connectivity, a long string of co-ax, a token ring, wireless, all use different methods to send and receive data, and yet they can all carry TCP/IP, or Vines, or NetBEUI, or IPX, so how does that work? It's really simple: the OSI model names the data transport Layer 2, with Layer 1 being the actual voltages/frequencies/waveforms of the signals used on the wires or over the air. The network cards, hubs and switches all send and receive 'data' using Layer 2. In Layer 2, every device on the network has an address, but it's not an IP address; it is a MAC (Media Access Control) address. You will see this MAC address printed on the back or bottom of any router, managed switch and network card. The MAC address is the physical address of the device, and if you were to tap into the network and monitor traffic you would see nothing but Layer 2 data: from physical address 01-23-45-67-89-AB to 01-23-45-81-22-C4, for example. The switches keep track of these physical addresses and deal only in these packets.

The actual 'data', which can be TCP/IP, is then encapsulated (enclosed) within this physical-layer data, and it's the job of the network endpoints (computers and servers) to translate the TCP/IP address into a MAC address, stuff the data into the Layer 2 packet and then send it. Likewise, upon reception, the endpoint extracts the TCP/IP packet from the Layer 2 packet and passes it to the operating system. This may seem over-complicated, but in fact it's essential: TCP/IP is not the only protocol in use today AND Ethernet is not the only physical transport. By separating the physical and the data we solve a world of problems, and can have TCP/IP travelling over Ethernet, WiFi, fibre, CDMA or GSM without having to care how it gets there. Likewise, we can have a range of protocols co-existing on the physical network without any impact on its operation. To use an analogy, consider sending a letter to a friend. Layer 1 would be the roads, the postbox, the postman's van, the sorting office, various hands and machines. Layer 2 would be the envelope, with the address on the envelope being the physical address, and the contents of the envelope would be the Layer 3 (TCP/IP) data. The envelope, being Layer 2, doesn't care how it gets to the physical address written on it, and the note inside, being TCP/IP, has no concept of how it physically gets to the friend's house; it enters the envelope at your house and emerges at your friend's house.
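
You can see this encapsulation first-hand by unpacking the first 14 bytes of an Ethernet frame, which carry exactly the fields described above. A minimal sketch; the sample frame bytes are fabricated:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split an Ethernet II frame into destination MAC, source MAC and
    EtherType. Everything after byte 14 is the encapsulated Layer 3 payload."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: "-".join(f"{b:02X}" for b in mac)
    return fmt(dst), fmt(src), ethertype, frame[14:]

# A fabricated 14-byte header plus payload; EtherType 0x0800 means IPv4.
frame = bytes.fromhex("0123456789AB" "0123458122C4" "0800") + b"...IP packet..."
dst, src, ethertype, payload = parse_ethernet_header(frame)
print(dst, src, hex(ethertype))  # 01-23-45-67-89-AB 01-23-45-81-22-C4 0x800
```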

Back to the "real switch matters"

A real managed switch brings some intelligence to the landscape, and is able not only to route packets more efficiently across the network but also to monitor the network for issues that could cause problems. Most good switches these days can do Layer 1 hardware diagnostics on the cables attached, as well as monitoring network events such as collisions, errors, storms and floods, and having this oversight can be invaluable when dumber hardware is connected to your network.

The reason for this article was a long-term hardware issue with a broadband router that would intermittently lose its connection for no apparent reason. Engineers would be on site, monitor the broadband circuit and couldn't see anything wrong with it. Extensive broadband diagnostics showed a line in perfect health, and when we left network test gear connected to it over a weekend we could see absolutely no issues. Reconnect the router, and within a few hours it would be out of service again.

Ha! It's the router! Well, we'd replaced that already, twice in fact, with no change, so what the fluffy fruit is going on here?

After a lot of what-ifs and a smattering of customer frustration, we sent in the Level 3 guys with their laptops, and the issue was finally identified. "Jefferson's Jellies!" exclaimed the Level 3 tech, "It's the network!". More specifically, a network cable that had been crushed behind a rack. This crushed cable was causing intermittent cross-connection between several pairs of the CAT5e cable, which caused garbage to be transmitted. A smart network switch would have noticed this and taken action, but the customer had only a dumb switch, and the router (a Draytek 2862) again had no intelligent network interface. The garbage on this crushed cable created what is technically known as a packet storm, which quickly saturated the network and, more interestingly, caused the Draytek 2862 to drop its PPP connection.

The moral here is a simple one

Why a packet storm on the network port of a router would cause the PPP connection to drop is something I can only speculate about, and this behaviour sent us looking in totally the wrong place. Had the customer spent a little more on a good switch, then (a) it would have dealt with the packet storm, and (b) we could easily have asked it where the problem was.

In all fairness to the technical teams, we rarely provide a broadband-only service; we would normally manage the network and, if not already installed, we'd install managed switches. But this was a tiny remote office for a small business customer, with a network installed by Pete the Plumber, and all they wanted was good, *reliable* broadband for SIP.

Get your network installed by a professional and ensure you receive CERTIFIED test results. They will look something like a network certification report, with a page for every port on the network. Any good network professional will provide these, but Plumber Pete can't and won't. If you are moving into a new building that already has structured cabling, or you suspect your cabling was installed by Plumber Pete or his mate, then having it certified is a simple and cheap process. As for a good managed switch, I'm not going to start recommending brands because, to be honest, most brands are OK for most networks, and it's not a case of the more expensive the better, although some will tell you differently.


Protecting Your Synology NAS from Internet Threats


Today we were informed by Synology that large scale brute force attacks are targeting Synology NAS devices accessible from the internet. Whilst these are fairly easy to thwart, the default configuration (depending on the version of DSM your NAS was first installed with) does not do so. The changes below will protect your NAS from the majority of internet threats, but not all, and we'll deal with the rest later. First, let's look at the initial steps we need to take.

The admin user

When you first install your Synology NAS, the administrative user is called admin. This is bad because an attacker doesn't even need to guess it, it's always admin, but we're not stuck with it: we can resolve this by creating a new administrative user and then disabling the admin user. To do this, log in to your NAS as admin, then go to Control Panel and User, and run the user creation wizard.

Be sure to select a GOOD password: at least 12 characters, containing at least one upper case letter, one lower case letter, one number and one symbol. An example would be S!n0LoG6nAs% but please don't use that one, pick your own.
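
That policy is easy to state in code. A quick sketch of the rule described above:

```python
import string

def is_strong(password: str) -> bool:
    """The policy above: 12+ characters with at least one upper case letter,
    one lower case letter, one number and one symbol."""
    return (len(password) >= 12
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("S!n0LoG6nAs%"))  # True  - all four classes, 12 characters
print(is_strong("password123"))   # False - too short, no upper case or symbol
```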

Next you need to select the group, and in this case you MUST pick administrators or there will be real problems later on. As long as you've selected administrators as the group, you can safely click Next through the following screens, since the administrative group has access to all things.

Now that's done, log out of your NAS and then log in using your newly created user.

Assuming that works, go back into Control Panel, select User, select admin and then Edit. Now place a tick in the "Disable this account" box, and make sure "Immediately" is selected below it. This will disable the admin account so that it can no longer be used to log in. Press OK. From this point onwards you will not be able to log in to your NAS using 'admin'; continue to use the new account you're currently logged in as. *IF* there is more than one administrator, then create a second login and make it a member of administrators, instead of sharing logins, which is bad practice on many levels.

*IF* you have other users logging onto your NAS, you may want to take a look at the Advanced tab under User and set some password strength rules along the lines of the above. Whilst a non-administrative user has greatly reduced access, a compromised account can still do long-term damage to your business, so ensuring passwords expire periodically, and making sure they are strong, is best practice.

2-Step Verification is a good idea, but unfortunately Synology does not support common two-factor authentication tokens like the YubiKey. So for now, unless you're happy to need a mobile phone with you every time you log in, leave this turned off.

Account Protection

Your Synology NAS has some powerful features to protect accounts from compromise, but in some cases you need to turn them on. You will find these under Control Panel, Security, on the Account tab.

Enable Auto Block - this is probably the most important feature here, and will block or ban IP addresses that fail password authentication a set number of times. The best-practice settings here are Login Attempts = 4, within 60 minutes, with "Enable block expiration" disabled.

This does mean that from time to time a real user will lock themselves out, but you can remove that block via the Allow/Block List button just beneath the settings.

Untrusted and Trusted Clients

One really powerful feature of your Synology box is the ability to differentiate between trusted and untrusted clients and set different limits for each. In our example we're giving trusted clients 10 login attempts within 5 minutes, but untrusted clients only get 8 attempts in 999 minutes. Unfortunately Synology won't allow you to set 1440 minutes, i.e. 24 hours, since 999 is the maximum you can select. Feel free to change these as needed, and if your genuine users lock themselves out then you can manage their restrictions using the "Manage Protected Accounts" and "Manage Trusted Clients" buttons by each section.

Firewall

The Synology NAS firewall can be unnecessarily complex to set up, but it's also very powerful when used correctly. Open the firewall configuration from the Firewall tab in Control Panel / Security, and select the currently active firewall profile. In the next dialogue you can configure a number of rules (and, in the case of larger NAS units, the interface to which they apply). The Synology help does a good job of explaining the firewall rules, but we'll give a brief overview here.

Firstly, you need to understand how the firewall works and what it actually does. The firewall inspects each incoming request to the NAS and looks at the SOURCE ADDRESS, DESTINATION ADDRESS, PORT and PROTOCOL of each. It then compares these against its allow rules to determine whether the request should be honoured or rejected. An example would be to allow access to DSM on port 5001 via TCP for your LAN only. In this case, assuming your LAN is 192.168.1.0/24, we would set up an allow rule for:

Ports 5001, Protocol TCP, Source 192.168.1.0/24 

By default, if no rule matches the source, destination, port and protocol, the packet is rejected, so please do not change this unless you know exactly what you're doing.
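
That first-match-or-default-reject logic is easy to picture in code. The sketch below is a generic illustration of the principle, not Synology's implementation:

```python
from ipaddress import ip_address, ip_network

# Allow rules as (source network, destination port, protocol).
ALLOW_RULES = [
    ("192.168.1.0/24", 5001, "tcp"),  # DSM from the LAN only
    ("192.168.1.0/24", 445,  "tcp"),  # SMB file sharing from the LAN only
]

def firewall_decision(src_ip: str, dst_port: int, protocol: str) -> str:
    for network, port, proto in ALLOW_RULES:
        if (ip_address(src_ip) in ip_network(network)
                and dst_port == port and protocol == proto):
            return "ALLOW"
    return "REJECT"  # default deny: anything unmatched is dropped

print(firewall_decision("192.168.1.42", 5001, "tcp"))  # ALLOW - LAN to DSM
print(firewall_decision("203.0.113.9", 5001, "tcp"))   # REJECT - internet source
```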

Synology makes it easier to add rules by allowing you to select applications instead of ports, but behind the scenes it simply converts applications to ports and protocols for you. The source address can be a single IP, a subnet, multiple subnets or a location. Be careful with location: whilst it's pretty good, it's not foolproof, and we have seen users allow GB (Great Britain) yet have clients rejected even though they are in the UK. Remember, you can add multiple rules here, but every rule you add is another risk, so don't expose applications (ports/protocols) to the internet unless absolutely necessary.

The Security Advisor


Synology have empowered your NAS with a tool which can analyse your security configuration and make suggestions on how to improve it. Not all suggestions must be acted upon, but having the analysis is really handy. Open up the Security Advisor and then select Run.

After a few minutes you should see an overview of recommendations. Hitting Results on the left gives you detailed suggestions and some guidance. 

Don't panic: there may be lots of red triangles, but these are just warnings. For example, we have unencrypted FTP enabled because some dumb devices still need it; that gives us a red triangle, but we know it's OK and we can ignore it. Do the same for each: understand the warning or suggestion and take action.

Evolving Environment

Unfortunately, flaws and exploits are being discovered daily, and whilst Synology is very good at releasing fixes and patches for discovered vulnerabilities, you should never rely on them being infallible. Instead, ensure you have a good, tested and working backup strategy for your NAS, so that in the event your NAS is compromised, or data is lost or damaged, you can swiftly recover with minimal loss.

At this point it's worth mentioning that GEN not only provide Synology technical support, but also host Synology RackStations in our datacentres, and unless you want to DIY it, we'll take care of all of these things for you, including disaster recovery.


Breaking the Digital Chain

I think everyone knows that we specialise in high-encryption data links, anything from single-layer IPSec right through to triple-layer tunnelling and disparate routing, but for some customers that's not enough.

The internet is a collection of many separate networks connected together in a way that lets traffic move freely from point A to point B via the most efficient route (most of the time). This routing is automatic and can only be influenced slightly from the A or B end. Interception of data travelling across the internet in its encrypted form is easy, but decoding it is virtually impossible. You can, however, determine the point of origination and the destination, so to snoop on encrypted tunnels you would need to compromise one of these. We make this far harder by using relay nodes in disparate jurisdictions, so that the packet journey, instead of being A -> B, is now A -> Node1 -> Node2 -> Node3 -> Node4 -> B. This means any sampling of traffic between Node1 and Node2, for example, would reveal ONLY Node1 and Node2, but not A or B. To locate the traffic source or destination, ALL the relay nodes would need to be compromised upstream or downstream. That is, from a sample between Node1 and Node2 you would need to physically compromise Node1 to find A, Node2 to find Node3, Node3 to find Node4, and Node4 to find B. In reality we don't use 4 relay nodes, but a minimum of 10, usually more. The nodes themselves are a mixture of cloud hosting (free and paid), virtual and physical servers, Tor network nodes and public proxies. Some of the 'free' services are less reliable, but a little smart routing solves this problem.
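
The property that a node learns only its immediate neighbours comes from layered encryption: the sender wraps the payload once per hop, and each relay can peel exactly one layer. A toy demonstration using the Python cryptography library (this illustrates the principle only, not our production construction):

```python
# Toy onion wrapping: encrypt for the last hop first, so each relay can
# remove exactly one layer and sees only ciphertext destined for the next.
from cryptography.fernet import Fernet

hops = ["Node1", "Node2", "Node3", "Node4"]
keys = {hop: Fernet.generate_key() for hop in hops}

# A end: wrap the payload in reverse hop order.
packet = b"destination=B;payload=hello"
for hop in reversed(hops):
    packet = Fernet(keys[hop]).encrypt(b"routing-for-this-hop;" + packet)

# Each relay peels one layer and forwards what remains.
for hop in hops:
    packet = Fernet(keys[hop]).decrypt(packet)
    packet = packet.removeprefix(b"routing-for-this-hop;")

print(packet)  # b'destination=B;payload=hello' - visible only after the last hop
```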

You can see, however, that there IS a way, given enough resources and physical access to the locations holding nodes along the path, to discover the A and B ends and then compromise them. We needed a way to break this digital chain for one customer, so that there was no internet 'connection' between A and B at all. This method of breaking the chain is known as out-of-band routing, and often utilises either private networks or non-data networks. A good example of out-of-band routing was to use modems and have part of the route travelling over analogue phone lines, but in recent years this has become less secure: as technology evolves it is easier to discover and monitor calls between countries and determine the start and end physical addresses. We can make this harder by placing the originating or receiving modem in countries with antiquated telephone systems, but then you run into data issues and a further reduction in the already low bandwidth.

A new customer provided the impetus for finding a new way to break the chain with a new out-of-band transport, and the solution was ingenious, even if I do say so myself. Whilst the job has now completed and the route is no longer in use, I will not be sharing the location, the country or any media from the actual connection, because that would be idiotic. The image above is a library shot of a Marvel digital display board, not the manufacturer used, and is there for illustration only. What I will do is describe the adopted solution to the problem.

After much discussion and consideration of point-to-point laser, microwave, hijacking local radio and piggybacking satellite TV, we were in the unusual position of being at a loss. We knew what we wanted to do, just not how to achieve it, and this continued for about a week. One idle Tuesday lunchtime, one of our techs was browsing live CCTV feeds on YouTube and, like a clap of lightning, the idea was born. We would utilise a live CCTV stream and some form of transmission to act as the out-of-band route, but how would we transmit the data? Luckily the data rate required was very, very low, bytes per minute instead of the usual Mb/s, and it was unidirectional (one way), so it could be something very analogue, but it took another full week to come up with the transmission medium.

Whilst browsing hundreds, if not thousands, of live CCTV feeds, we noticed in one an unremarkable street with an equally unremarkable bus stop, with what looked like a digital sign on the end. We spent some considerable effort trying to identify the company name at the bottom of the sign by enhancing the imagery and applying various filters, only to later notice that one of the ads loosely translated as 'advertise here, call this number'.

Calling the number, finding a translator, calling the number again, this time with the translator, and after some negotiation we purchased ad space on this specific sign for a period of time; and no, we didn't use a credit card. After receiving the credentials we were able to access the sign's HTTP interface (yes, plain HTTP) and upload some sample ads, which were in fact just images played in a loop with various transitions. Some PHP code later and we had the auto-upload working.

Now we needed to figure out a way to transmit data from the sign and pick it up again in the live CCTV feed. After many attempts we settled on an ad which consisted of a number of white squares on a black background, with some unimportant company branding that we totally made up. Streaming the CCTV image was simple; converting it from frames to separate images and removing all the duplicates, leaving only the changes, was a fairly simple if laborious job. Parsing the images to determine the actual data took a little more time, but again wasn't hard to do. From a technical point of view, we broke the image into separate areas, calculated the average intensity of each area and then converted that into a simple 0 or 1. Some of you may be thinking we could have used barcodes, QR codes or other encodings with much higher data density, but this specific project required only very low data rates.
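
The decoding step amounts to averaging pixel intensity over each square's area and thresholding it. A minimal sketch of that idea using the Pillow imaging library; the grid geometry and threshold are illustrative assumptions, not the values we used:

```python
from PIL import Image

GRID_COLS, GRID_ROWS = 8, 4   # layout of squares in the ad (assumed)
THRESHOLD = 128               # mean brightness above this reads as a 1

def decode_frame(path: str) -> list:
    """Read one captured CCTV frame and return the bit grid it encodes."""
    img = Image.open(path).convert("L")       # greyscale, intensities 0-255
    w, h = img.size
    cw, ch = w // GRID_COLS, h // GRID_ROWS   # pixel size of one grid cell
    bits = []
    for row in range(GRID_ROWS):
        for col in range(GRID_COLS):
            cell = img.crop((col * cw, row * ch,
                             (col + 1) * cw, (row + 1) * ch))
            mean = sum(cell.getdata()) / (cw * ch)  # average cell intensity
            bits.append(1 if mean > THRESHOLD else 0)
    return bits

# bits = decode_frame("frame0001.png")  # 32 bits per captured frame
```

In practice a handful of those cells are reserved for a fixed pattern, so a frame can be rejected when something (or someone) obscures the sign, as described below.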

We tested the solution over several days, sending images to the display panel in batches of one, recording them from the live stream, decoding them and comparing the data. A few changes were made to the layout, and the intensity was lowered during the dark hours to improve capture clarity. We did occasionally have people standing in the way, but we wrote code to detect this by including a persistent layout of squares that we verified before decoding the data and sending it on; a sketch of that check follows below. 
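
Here is a hedged sketch of that occlusion check, continuing from the decode_frame() sketch above: a few reserved cells of the grid always hold a known pattern, and a frame is only trusted if that pattern reads back correctly. The marker positions and expected bits are illustrative, not the real layout.

    # Reject frames where the fixed calibration squares don't read back
    # correctly (e.g. someone standing in front of the sign).
    MARKERS = {(0, 0): 1, (7, 0): 0, (0, 1): 0, (7, 1): 1}  # cell -> expected bit

    def frame_is_clear(bits_by_cell):
        """bits_by_cell maps (x, y) grid positions to decoded 0/1 values."""
        return all(bits_by_cell.get(cell) == want
                   for cell, want in MARKERS.items())

    # Usage: only forward data from frames that pass the check.
    # bits = decode_frame("frame0001.png")
    # if frame_is_clear(bits):
    #     forward(bits)  # 'forward' is a hypothetical downstream handler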

Next we needed to set up the rest of the route, which we did with 14 nodes between the A end and the remote server that took the bit data, converted it into an image and uploaded it to the sign, and another 10 nodes from the remote server capturing the live stream, decoding it and forwarding the data on to the B end. 

As we had no access to the A or B end we were unable to test further ourselves, but our customer tested the connection themselves and was satisfied with both the performance and the security strategy. 

It's important to state that we do not know the purpose of the data transmitted, or indeed even the customer, as we were dealing with an agent, sometimes called a proxy, but we do know the customer was happy with the solution. Remember that this is an extreme case with uncommon prerequisites, and for most a string of relay nodes and two-layer encryption is more than sufficient to prevent unauthorised interception. If you can think of a better way to achieve out-of-band communications then please leave us a comment, and if you found this interesting then please rate it. 

Copyright

© GEN, E&OE

CDN's and the recent trend of Blacklisting Genuine Customers


There has been a recent shift towards using Content Delivery Networks to distribute content rather than hosting in a conventional way, and this brings with it a selection of good and bad. One of the regular issues we receive at the HelpDesk is primarily generated by Cloudflare, which offers 'free' content delivery, making it a popular choice for smaller websites. The most common complaint is the "One more step" screen to the right, which prevents the customer from visiting the website without completing the infamous Google ReCaptcha. Given the serious privacy concerns surrounding Google ReCaptcha, would it be Cloudflare or the website owner who is responsible for *not* highlighting this to the end user? Regardless, our standard answer in this case (and it's a canned response now) is "Unless this website is business critical, close the tab and select another website". There is some suggestion that these messages are generated in an attempt to rate-limit or reduce load on either Cloudflare or the vendor's website, but this is unconfirmed. 


So what causes Cloudflare to blacklist business customers from visiting their vendors' websites? Cloudflare will claim that they blacklist IP addresses that exhibit unusual traffic, as well as those on commercial blacklists. That sounds great in theory, but with the vast majority of client IPs being dynamic (including mobile devices) this blacklisting simply prevents customers reaching vendors, and for no technically good reason. If the blacklisting weren't inherently flawed then we would not see the volume of HelpDesk requests on this very issue, with genuine customers trying to reach genuine vendors, and it's for this reason that we no longer offer Cloudflare as an option on our hosting services. 

Error 1005
Another example of Cloudflare blacklisting, this time suggesting that the website owner enabled the block, is the message to the left with "Error 1005". In this case we're shown that the network AS8560 is blocked from accessing the site. This HelpDesk ticket was raised by a customer who was in fact in Germany, using a tablet in a coffee shop, and who wanted to check who had been blowing up their mobile. There are of course other sites which I'm sure satisfied their curiosity, but the customer was concerned that the message might indicate a problem, because quite honestly, to the end user it is a little intimidating. 

In the "Access Denied" message to the left we again have a genuine customer who was trying to access their account on a vendor's website, and yet again we're told that it's not going to happen, this time suggesting that the client is somehow responsible for an online attack. They are of course not responsible for anything except trying to access their vendor's site, but again this sort of message just generates HelpDesk requests, and it takes time and effort to explain to the customer that they've done nothing wrong and that they should consider another vendor in future. In this particular case "Error 1020" indicates that the website operator has established this block as a firewall rule, which you would think was intentional, but I can't speak for the site or the site owner. 

That's enough of Cloudflare, which is after all a free service for most, and you cannot really complain about a free service if you know what it's doing. The very issue here, though, is that the vendors operating these websites are in most cases unaware that customers are being turned away or impeded from visiting. The prevalence of Cloudflare means that once a customer's IP is blacklisted, a good few sites in their daily browsing will all be met with the same resistance. You could say "contact the vendor", but how do you do that when you can't access their website? 
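
For vendors who want to check this for themselves, here is a hedged sketch of a quick external test showing whether a URL is being served by Cloudflare and whether the response looks like a challenge or block page. The header names are genuine Cloudflare response headers; the URL is just an example, and a proper test would run from several networks, since blocking is per visitor IP.

    # Quick external check: is this URL fronted by Cloudflare, and does the
    # response look like a block/challenge page? Assumes the 'requests' package.
    import requests

    def check_site(url):
        r = requests.get(url, timeout=10)
        served_by_cf = r.headers.get("Server", "").lower() == "cloudflare"
        ray_id = r.headers.get("CF-RAY")      # present on Cloudflare responses
        looks_blocked = served_by_cf and r.status_code in (403, 503)
        print(f"{url}: HTTP {r.status_code}, cloudflare={served_by_cf}, "
              f"ray={ray_id}, looks_blocked={looks_blocked}")

    check_site("https://www.example.com/")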

Cloudflare is not alone, and there are a growing number of alternative Content Delivery Networks, all bringing their own flavour of issues to the market and preventing customers from visiting vendors, and there can be nothing worse for a growing e-business. I understand that protecting the business from 'attack' is a good idea, but in reality we're not protecting the vendor from anything; what is actually happening is that the content delivery network is protecting itself from excessive load, at the vendor's expense. 


One effective but equally concerning way around this is to use a free proxy server, and the internet is full of them: just search "free proxy server". These servers, whilst for the most part safe, have the ability at the protocol level to intercept your traffic, even traffic over HTTPS if the proxy terminates the connection itself, which presents a clear danger. Whilst it's beyond the scope of this article to discuss the technical ramifications of HTTP proxies, our recommendation is simple: please do not use them. 

Summary

The idea of CDNs is great and has a mostly positive effect on content delivery and site speed, but when your CDN starts blocking customers from visiting your site, either by itself or due to (mis)configuration, then you need to assess the overall benefit to the business. In other words, what is the likelihood of your website being 'attacked', and by 'attack' we mean an attack that a CDN can actually block (which is very few), versus the potential lost business due to customer rejection? It's a hard one, and as CDNs become more popular I think it will become increasingly relevant. 

Looking through 3 months of tickets raised in our Support/Web/Browsing channel and selecting a few from the list I find: 

  • analog.com (analog devices) access denied - customer was looking for components for project - went elsewhere. 
  • semrush.com : various - customer was trying to access account - gave up trying. 
  • moneysavingexpert.co.uk : one more step - customer was following link from google - filled box still rejected. 
  • fiver.com : one more step - customer was trying to buy services because we are 'too expensive', got to love tickets - customer went to seoclerks.com instead. 
  • yell.com : forbidden - customer was trying to find business phone number - directed them to alternative website. 
  • yelp.co.uk : sorry you are not allowed to access this page - customer was trying to check reviews - went elsewhere. 
  • scottishpower.co.uk : one small step/not a robot - customer trying to contact company - agent found phone number for customer and advised them to compare prices. 
  • rswww.com : permission denied - customer trying to purchase components - customer went to another supplier.
  • royalapplications.com : An error occurred in retrieving update information - This took 4 hours of helpdesk time to determine that the update url "royaltsx-v4.royalapplications.com" is a Cloudflare URL and was being blocked. 
  • rigol.com : one more step - customer was trying to compare equipment specifications - customer attempted to complete captcha but was then told they were blocked. 
  • talktalk.co.uk : Request Rejected - customer was trying to report a fault on their service - customer was persuaded to source broadband elsewhere. 

There are many more, and a lot of tickets don't actually specify the website, but you get the idea: from our small subset of customers, 46 of them gave up and were advised to go elsewhere. There's no way to tell how many customers successfully accessed these sites versus how many were presented with stupid rejection messages, so our sample set is the only indicative data we have, but it paints a clear enough picture of the scale of the problem. 

 

 

Copyright

© 2019 GEN, E&OE

A VPN is Unlikely to Protect You


It seems that the Internet, and Social Media (especially YouTube), is full of advertising for VPNs so you can somehow access the internet in a covert way, but what they don't tell you is that for most people a VPN does absolutely nothing except empty your wallet. VPN stands for Virtual Private Network, and VPNs have an important role when you want information encrypted between two endpoints. GEN uses a highly secure VPN (our SAS Service) built on Juniper Pulse Secure, which enables our customers to connect to our Intranet and from there access their company's private networks. GEN SAS provides three important functions: (a) it authenticates the end user, (b) it encrypts all traffic from that end user to the Intranet, and (c) it provides privilege enforcement so that some users can only access some of their company's resources. End-user VPNs such as HMA, NordVPN, SuperVPN, UltraVPN, SafeVPN, CyberGhost, ExpressVPN, IPVanish, SaferVPN, PrivateVPN, Hotspot Shield, StrongVPN and many more advertise Unlimited Bandwidth, Zero Logging and a plethora of misleading technical claims to entice the uninformed into parting with their hard-earned cash for the promise of anonymity. 

Will a VPN protect me?

That's very simple: possibly, but only if you don't use it on the same device you regularly use for internet access, and even then it's unlikely. To understand exactly why that is, let's first understand what the VPN is actually doing for you. 

How a VPN works

When you access the internet, traffic from your devices (PCs, tablets, etc.) goes to your router, and the router has the job of forwarding your requests to the internet, receiving data back and relaying it to your devices. Your router will appear on the Internet as one IP address (usually), and this IP address will either be fixed (static) or will change from time to time (dynamic). Your ISP knows which IP address you are using at any point in time because your router 'authenticates' with the ISP when it first connects; from the ISP's point of view your router is assigned an address from its pool (the same one every time if static, or a random one if dynamic). Because your ISP knows which IP address you are using at any one time, and because 'most' ISPs use traffic shaping, they can prioritise or delay traffic of certain types, as well as maintaining logs of what you access and when. As a business ISP, we don't prioritise or delay anything, but for the purpose of this article we're going to assume the majority of our audience are domestic users. 

A VPN establishes a software 'tunnel' between your device and a server on the internet managed by your chosen VPN provider. All traffic destined for the internet is instead sent through this tunnel, and the IP address that originates your traffic becomes the IP address of the VPN provider's server. Likewise, traffic received for you is routed back through the same tunnel to your device. The tunnel is encrypted with varying strength; different providers use different methods and strengths. 
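
A simple way to see exactly what the tunnel changes is to check what IP address the wider internet sees for you, before and after connecting the VPN. Here is a minimal sketch; ipify is a public "what is my IP" service used purely as an example endpoint.

    # Print the public IP address the internet currently sees for this machine.
    # Run once before and once after connecting the VPN; only this apparent
    # address should change. Assumes the 'requests' package.
    import requests

    def public_ip():
        return requests.get("https://api.ipify.org", timeout=10).text

    print("Apparent public IP:", public_ip())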

I want to draw your attention to the image to the right, which came from a site advertising VPN services for a price. I used this image for three reasons. Firstly, it's a good image and, whilst mildly entertaining, does show how a VPN works. Secondly, their site is generated almost entirely by JavaScript, which then builds the HTML page from resources; this isn't completely unusual considering it's a WordPress site, but I found the method they used to obscure images interesting, although of course easily overcome. Finally, the image highlights, to me at least, that there are two glaring weak points in this setup: YOU and the VPN server. Compromising either gives the game away, and it's not impossible to do. 

Using a Browser via the VPN

When you visit a page, such as google.com, your browser is kind enough to share with google.com the contents of any cookies it holds for that domain; these cookies are created and updated every time you visit a particular website. Google, for example, uses 6 different classifications of cookies, many with multiple cookies each, and spreads cookies over 17 Google domains that they list in their privacy policy. These cookies IDENTIFY YOU explicitly. Every time you log in to any Google service such as YouTube, Google or Gmail, your identification is stored in cookies. Using a VPN has zero effect on Google tracking you via its cookies, so even though your IP has changed and may even be in a different country, Google knows who you are. This is not limited to Google; pretty much all websites you visit will have some sort of tracking data in cookies. You can of course clear these cookies manually, but the first time you use a Google service, Facebook, Twitter, Instagram, Pinterest and so on, the game is over and you're identified. 

Some browsers are also leakier than others. Many browsers today have plug-ins or built-in features that send every website you visit to the browser developer or a third party (such as your antivirus provider) to 'check' for phishing or fraudulent sites, and with that data also goes personally identifying data. If you're using your VPN, the same data will travel over the VPN, thereby linking your identity to your new IP address. Turning all this off is not a simple process, but it's do-able in most browsers. Additionally, browsers and operating systems exhibit a range of security vulnerabilities that can be, and regularly are, exploited by carefully crafted JavaScript, a plug-in or extension, or a downloadable application, any of which can access not only cookies but identifying data such as serial numbers, licence numbers and, with very little effort, your real IP address. The technical strategy to achieve this is way beyond the scope of this article, but trust me, it can be done and it's not that hard to do. 

Using an Application via a VPN

So you've decided that you're never going to use a browser on your VPN, and that's a great start, but you should know that on Windows your operating system is communicating with Microsoft almost constantly, your antivirus product is communicating back to base constantly, and even your keyboard driver could well be calling home to check its version. Your identity is being given away on an almost constant basis to a wide and varied range of companies. Stopping this is pretty much impossible with Windows and macOS, but it is do-able on Linux with some effort. 

Using email via a VPN

Using email requires two things to happen. Firstly, your device needs to connect to the mail server which stores your email. For our customers that server is probably mail.genzone.net, and this server records the fact that you have logged on to your mailbox, together with your current IP, which will be the VPN's. For GEN this information is only kept for 36 hours, after which time it's purged, but the majority of other email providers such as Microsoft (Office 365, Hotmail, etc.), Google (Gmail, GSuite, etc.) and many more will keep this information considerably longer, and of course they will share it internally to connect your IP to your identity. 

DNS Leakage

DNS is the Domain Name System and is used to convert a domain name, like www.gen.net.uk, into an IP address. When using a VPN, DNS queries SHOULD be intercepted and handled over the tunnel by the remote server, but this is often not the case, leaving DNS queries to be sent to your ISP. This allows your ISP to see every website you're visiting, although not the actual content, which still goes over the VPN tunnel. 
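
You can check for this yourself. Here is a hedged sketch of a basic DNS-leak check, assuming the dnspython package (version 2.x): it shows which resolvers your system is actually configured to use. If these are your ISP's resolvers while the VPN is up, your lookups are travelling outside the tunnel.

    # Show which DNS resolvers this machine is using and do a test lookup.
    # Assumes dnspython 2.x (pip install dnspython).
    import dns.resolver

    resolver = dns.resolver.Resolver()  # reads the OS resolver configuration
    print("Configured resolvers:", resolver.nameservers)

    answer = resolver.resolve("www.gen.net.uk", "A")
    for record in answer:
        print("Resolved:", record.address)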

Using a VPN to bypass GeoIP

Some commercial services, such as video-on-demand, will check the country associated with your IP address and reject those outside of coverage. In most cases this occurs with USA networks such as HBO, SYFY, Discovery, etc., and using a VPN that allows you to connect to a server in the USA may temporarily bypass this restriction, assuming, that is, you have a billing address and bank account in the USA to set up the account. Even then, the performance is often so poor that watching video-on-demand from the USA over a VPN is problematic even when it works at all, and of course these companies are actively working to blacklist VPN service IPs. 

Google, Facebook, Twitter and pretty much all commercial websites are actively working to add VPN servers to a list of banned IPs. Google, for example, rarely works from a VPN, instead complaining that 'unusual traffic' has been received, and services like video-on-demand are also quick to blacklist VPN servers. The company MaxMind sells a maintained list of VPN IPs, describing it thus: "Anonymizers can cause headaches for companies attempting to identify who is visiting their website. The GeoIP2 Anonymous IP database provides insight into your traffic by identifying IP addresses which are used as various forms of anonymizers".

How can I be covert online?

There are certainly ways to do this, but it requires some discipline and structure. Firstly, the Tor Project provides a complete package of browser and onion-routed network, not strictly a VPN but serving the same purpose here, that's free to use and very secure (I recommend you make a small donation to the project if you use it regularly). You must still ABSOLUTELY NOT log in to any websites using this service or once again you're identified, but you are otherwise reasonably covert. Applications and your email client will not use Tor, so they will not give away your ID through it. (There are some situations where you can set up Tor to route all traffic, but this is not the default configuration, requires some work, and is definitely NOT recommended.) 

Using a virtual machine, preferably Linux, can provide you with a 'covert' presence, since you will ONLY access the VPN via this virtual machine, again providing you DO NOT log in to any websites or use any applications on the virtual machine that are shared with your local machine.

Breaking the VPN

A VPN by default is point-to-point, which means that you will have a tunnel from your device to a remote server managed by a company. This presents an inherent weakness in your protection, because by compromising the server you're connected to, both your identity and your traffic can be exposed. VPN providers will tell you that there's zero logging, but that's rarely true, because if there were no logging then how could they validate your credentials and respond to support requests? Even without logging, many of these providers are buying traffic from an ISP who certainly does log and probably captures traffic. Should an agency need to identify a user, it would only have to compromise one physical endpoint server in order to do so, and we know this has happened in the past. 

In Summary

Using a VPN service like the many listed above will give you some limited protection, providing you are using a virtual machine and NEVER use credentials to connect to any website unless those credentials were created specifically from your virtual machine and never used elsewhere. It's hard work, and I'm not sure anyone going about their lawful business would want to put this much effort into being covert online. Servers operated by VPN providers are blacklisted constantly, so never pay for your VPN service more than a month in advance or you could find it no longer works for the purpose you intended. 

Anyone serious about operating covertly online should consider (a) multiple VPNs traversing several jurisdictions and (b) burn-boxes to perform online activity. Both approaches can give you covert protection, again providing you NEVER EVER use the same credentials, browser, email or applications in both your local and VPN/burn-box environments, but I must point out that it only takes one slip-up and you will be exposed and identifiable. 

 

 

Copyright

© 2019 GEN, E&OE

How to annoy your visitors with Google ReCaptcha


For many years now there has been a steady proliferation of Google ReCaptcha, a free service provided by Google which is used to verify that a human is actually filling out your form. It was annoying when it first arrived on the internet, but the latest rendition takes annoyance to a whole new level with poor-quality images, multiple pages to select and more. So why do so many websites choose to irritate their visitors with Google ReCaptcha?

Well, firstly it's free and readily integrates with most hosting platforms; secondly it's thought to be effective; and finally, for whatever reason, people think it's a good idea. In reality that's not at all the case: it is free, but there are serious privacy concerns; it's not effective, as it can be bypassed easily with a browser plug-in or broker service; and I don't think there's a complete understanding of just how annoying it is, especially for those on small screens or those with imperfect vision or hearing. But first let's talk about privacy, as that's a hot topic these days. 

Privacy Concerns

If you click Privacy or Terms from the Google Re-Captcha box then you're taken to the generic Google Privacy Policy or Terms, which make no reference to ReCaptcha or what it will collect. This odd behaviour could only be by design. If you dig into the Privacy Policy for ReCaptcha, which is nearly impossible to find, you discover the following:

  • reCAPTCHA is a free service from Google that helps protect your website and app from spam and abuse by keeping automated software out of your website.
  • It does this by collecting personal information about users to determine whether they’re humans and not spam bots. reCAPTCHA checks to see if the computer or mobile device has a Google cookie placed on it. A reCAPTCHA-specific cookie gets placed on the user’s browser and a complete snapshot of the user’s browser window is captured.
  • Browser and user information collected includes: all cookies placed by Google in the last 6 months, CSS information, the language/date, installed plug-ins, and all JavaScript objects.

Blimey, who knew? After reading that do you still believe Google Re-Captcha is a good idea for your website? 

  • The Google reCAPTCHA Terms of Service doesn't explicitly require a Privacy Policy. However, it has the requirement that if you use reCAPTCHA you will "provide any necessary notices or consents for the collection and sharing of this data with Google".

But this is often, if not always, overlooked by website owners; in fact I cannot think of a single website using ReCaptcha that actually notifies you, prior to its use, that you're going to be sharing a bunch of data with Google just by clicking "I'm not a Robot". Let's review and expand on the Privacy Policy and what is collected...

  • A complete snapshot of the user's browser window, captured pixel by pixel.
  • All cookies placed by Google over the last 6 months, plus an additional cookie placed by reCAPTCHA itself. 
  • How many mouse clicks or touches you've made.
  • The CSS information for the page, including but not limited to your stylesheets and third-party stylesheets. 
  • The date, time, language and browser you're using, and of course your IP address. 
  • Any plug-ins you have installed in the browser (for some browsers).
  • ALL JavaScript, including your own custom code and that of third parties. 

So at this point you, as a website owner, are obligated to disclose to your users that by clicking on the "I'm not a robot" re-captcha they AGREE to all of the above being shared with Google. That is not only an inconvenience, but pretty much no one does it, because in most cases they don't fully understand what data is being shared. This is a real problem, especially in the EU, where GDPR has already caused many websites to display mandatory and equally annoying cookie confirmations, and has even restricted access to a large number of really useful sites from within the EU.

Annoyance

In a recent survey conducted by GEN with our business customers we included a question about Google ReCaptcha, asking users to rate how annoying it was from 1 to 10 with 10 being the most annoying, and 94% rated it a 10. Now, it's a small sample set of a few thousand users, but it does indicate a general appreciation of the inconvenience it presents. Personally, when I see the 'I'm not a Robot' box, unless it's absolutely critical I'll just close the page and move on to something else, and this is a view shared collectively at this office, as it probably is at most. 

For those outside of the USA, a crosswalk is what Americans call a pedestrian crossing; in the pictures it's the white lines across the road, but of course in most of the rest of the world these are black and white or black and yellow. This is a regular misunderstanding, as are palm trees, which are the trees with the leaves at the top and never seen in many countries. 

If you're not a robot, and I am certainly not, it's easy to wind up with the dialogue to the right after getting a couple of images incorrect, after which you're stuck and cannot submit your form without closing the browser, re-opening it and filling the whole thing out again. That is really, really annoying. 

Alternatives

There is a whole myriad of alternatives to Google ReCaptcha, most of which are self-hosted and have none of the privacy issues associated with Google ReCaptcha. The general trend these days is that a visible captcha isn't required at all, since form submission mechanisms have evolved to use a hidden captcha: a generated seed on the form that is passed and validated server-side on submission. A robot (or bot) will typically POST the form blindly, which this hidden captcha easily defeats, and further validation of field types can pretty much eliminate bot POSTing altogether, removing the need for anyone to click traffic lights, fire hydrants, store fronts or any other collection of images whilst handing Google their personal information. A sketch of the idea follows below.
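
Here is a minimal sketch of that hidden-captcha pattern: the form carries a server-generated seed (here an HMAC over a timestamp) plus a honeypot field that humans never see or fill in. The field names, secret and timeout are all illustrative, not a specific framework's API.

    # Hidden-captcha sketch: issue a signed seed when rendering the form,
    # then validate it (and a honeypot field) when the form is POSTed.
    import hmac, hashlib, time

    SECRET = b"change-me"  # server-side secret (assumed to be stored securely)

    def make_seed():
        ts = str(int(time.time()))
        sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
        return f"{ts}:{sig}"  # embed in a hidden <input> when rendering

    def validate(seed, honeypot_value, max_age=3600):
        ts, _, sig = seed.partition(":")
        expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False              # seed was forged or tampered with
        if int(time.time()) - int(ts) > max_age:
            return False              # form is stale, likely a replay
        if honeypot_value:
            return False              # bots tend to fill every field they see
        return True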

Summary

  • Google Re-Captcha is not infallible and can be defeated by browser plug-ins or brokers. 
  • Google Re-Captcha has serious privacy issues especially in Europe. 
  • Google Re-Captcha is annoying to visitors and deters customers. 
  • Google Re-Captcha can present images of such poor quality (to the left) that no one can accurately guess them. 

If you are using Google Re-Captcha on your website then look for alternatives; there are many out there, and many of them require the visitor to enter nothing at all, working silently in the background. If you have a GEN hosted website and would like assistance in replacing your Google Re-Captcha then please raise a ticket at the HelpDesk and we'll do our best to assist you. 

In writing this article we rely on sources from Google's website and others. We make every effort to ensure accuracy, but things do change, especially terms and policies, so be sure to check the current status. 

Copyright

© 2019 GEN, E&OE

Social Media, Google, Bing, Yahoo, Amazon, ISPs, Government Tracking and Personal Data Leakage

After our post 'In defence of social media', which was itself a response to the disproportionate news coverage of Facebook specifically, there have been many responses generally accepting that it should have been common sense that nothing is 'free', but also showing a clear misunderstanding of how people are tracked online, what exactly is collected, and by whom. This isn't unreasonable, because the whole tracking and collection industry is shady and insidious, and just for clarity, I was correct when I said GDPR would make absolutely no difference. So, how about we look at a few specific examples of data capture from some big players in the market...


Let's start with Facebook, purely because it was the subject of recent news stories. 

Facebook of course collects everything you feed into it; this includes your name, address, date of birth (if anyone actually uses their real date of birth), phone numbers, email addresses and so on. This data forms the root record (the record to which everything else is attached). 

To the root record we then add everything you view, everything you like or dislike, everything you post (Images, Text, Links), every message you send and receive and every ad that is displayed or clicked. 

Associations are also added, that is "Friends", and the interactions between you and your "Friends" are logged; common interests or appearances in shared photographs are recorded too. 

If you use the Facebook app on your mobile device then your location (unless you deliberately disable it) is recorded and stored. 

If you are unfortunate enough to have used your Facebook 'login' to log in to third-party websites, then a record of that site, when you use it and for how long, is also included. 

Facebook was reportedly paying people to give up their privacy by installing an application that sucks up huge amounts of sensitive data, and explicitly sidestepping Apple's Enterprise Developer program rules. This has now been brought to a shuddering halt by Apple, so thanks Apple. More information on this one HERE.

As you can see, Facebook stores pretty much everything you do, and that's their business model: you get to waste hours of your life that you'll never get back, and Facebook sells the data they collect from this activity. There's nothing wrong with this business model; it works and has been around for decades. 

Pinterest, Instagram (now owned by Facebook), Tumblr and so on

These sites, which are generally 'image' sites, record everything you add to the profile; to that they add everyone you follow and every image you view (and for how long), and further, some of them scan the images uploaded, recognise faces and then form internal relationships between the images and users. There's nothing wrong with this business model either, of course, except perhaps the fact that the moment you upload your image it's no longer your image, but that still doesn't stop people using these services. 

Twitter

Twitter has been around for a few years now and is basically a 'feed' service where you follow topics and people and receive updates from them. It's a simple model yet an effective one. Twitter records your posts, reads, follows and followers. It also records every link you follow from posts. Twitter inserts 'ads' into your feed, which is annoying but not a show-stopper, and these are of course paid for by the advertisers. The rest of Twitter's revenue comes from selling your data to third parties, which is again a good, sustainable business model. In the early days Twitter was wide open to abuse, where 'fake' accounts were created in celebrities' names, causing unsuspecting followers to be duped and directed to 'donation' or 'malware' sites, but Twitter put a stop (mostly) to this by 'verifying' some celebrities to remove any confusion. Twitter also allows the embedding of links, audio and now video into the feed, which is great but brings with it a new set of challenges around protecting users, and also provides additional tracking metrics. 

 

Google

Google is a huge company with many 'services', most of which are 'free' to use. Let's look at probably the most common service, the "search" engine. There's no denying that Google.com is a great search engine, and if you're looking for something a little obscure then it's your go-to engine, but let's look at what's captured. 

When you search on Google, the search term is recorded along with the results, which results you click on, and the time taken for that click. This builds associations of interest against your Google profile (if you created one, or against a unique identifier if you didn't). This in itself isn't really bad, and you would expect them to capture this information, surely? This search history is further used to focus future searches, so the more you use it, the more likely you are to get applicable results, but that is the official line, and don't ever believe that Google is the only search engine; it's not. Because of the way Google adds sites to its index, sites with large budgets and resources always find their way to the top results even when they aren't applicable at all. Moreover, Google adjusts the results of political, social, personal or controversial searches to add its own bias to what you see, and many would argue that this bias, which most don't even realise is there, is wrong on many levels. Other search engines, such as DuckDuckGo, often produce more evenly weighted results without adding bias, which some may prefer. 

Getting back to Google the company, we need to talk about Google Analytics, which is yet another 'free' service allowing website owners to get insights into their visitors. This is actually really useful, but for it to work Google needs to be able to connect YOU as a person to the site, which it does easily. This gives Google not only your search queries, results and clicks, but now also most of the websites you visit, when you visit them, for how long, and what you do on those sites. Now we're starting to collect some seriously valuable data, and this is of course the business model again: you get lots of free services, and Google makes money from advertisers and the data. Google also allegedly purchased shopper data from MasterCard, which, when augmented with your online profile, adds a wealth of additional behavioural data. 

That incredibly annoying "I'm not a Robot?" box? Well, that little thing captures a vast collection of personal data, and all you have to do is click some pictures and be annoyed by it. 

Other Services (Gmail, Google Docs, Groups, Google+, Google Drive, and so on)

Google offers a bunch of other 'free' services, all of which are quite useful, but to use them you'll need to provide your mobile phone number, which you are forced to verify by entering a code from a text message. Each of these services brings yet more data to the profile they are maintaining on your behalf. Every email you send and receive via Gmail is scanned, stored and linked. Every document you add to Google Docs is scanned, stored and added; any file you store on Google Drive is scanned, stored and added. Are you seeing a pattern here? Nothing you do on any Google service is private. How about Google Maps? A very useful tool if you want to find somewhere, but yet again everything you look at is recorded and added to your profile. If you have an Android phone then your location data is also added to your profile, along with your messages, apps installed, app usage, contacts and so on. Google Home is a voice assistant and speaker for your home, but again, anything you ask it is stored and added to your profile data. 

YouTube (owned by Google) again stores the videos you watch, channels you follow, comments you make and so on. 

Android, the phone operating system developed by Google as open source, has its own class of information leakage: every app you install and use is tracked, and unless you specifically disable it (and there's still a debate over whether you actually can), your location is tracked using your phone's GPS. Mapping this allows Google to record all the places you visit, the shops you go into, and for how long. 

Google Chrome is a web browser developed by Google and is again free to download and use. Within the browser there are options to 'store' your credentials and bookmarks in the cloud, and this of course gives Google that data to further add to the profile. We also noticed that Chrome (unlike other browsers) creates several local files storing your search history, browser history and so on, for reasons unknown. The files are unprotected, meaning that we (or any software, malicious or otherwise) can easily read them to obtain this information. At the time of writing we also noted weak protection of your stored passwords, but this isn't specific to Chrome, and several other browsers are also easy to crack. 

So Google knows what you search for; what you view, for how long and how often; what you buy; what you look at but don't buy; how often you buy something; what you read; what you post and what posts you read; and what pictures and videos you view, how often, and from which websites. That much everyone expected, but wait: Google was recently exposed by the EFF for using methods to bypass Apple's protection and capture users' screens. Read the linked article HERE for more details. 

Bing & Yahoo

Bing is a search engine that is pretty useless in fact, is even more unfairly weighted towards sites with $$$, and subsequently doesn't have any significant market share (about 7% at the time of writing), but that doesn't mean they don't store your searches, links clicked and so on, because they do. There's a 'relationship' between Microsoft and Yahoo which goes back several years and brings Yahoo results into the Bing search engine, which is probably a good thing, but it also brings Yahoo's free services, such as Yahoo Messenger, Yahoo Groups and so on, into your search footprint. Yahoo itself has been bought and sold several times and the actual ownership is hard to pin down, but we do know that the majority is owned by Oath Inc. (part of Verizon) at the time of writing. 

Generally speaking, the use of Bing and Yahoo is fairly limited these days, since Bing's search results are limited and Yahoo's reputation has been shredded by past data breaches. The use of Yahoo Mail brings with it the same issues that Gmail has: your emails and everything in them are scanned and stored. Microsoft's Hotmail is exactly the same, and why shouldn't it be; it's free after all. Yahoo's GeoCities, which is pretty much dead now, and Yahoo Groups, if anyone still uses them, bring yet more profile cross-linking, with group 'Members' being associated by topic and post, and of course you must have a Yahoo account to participate.

GeoData

Pretty much ANY app on your mobile device, on Android at least, is able to track your location using the device's built-in GPS. On Apple devices it's harder, but still perfectly do-able. Collecting this GPS data, as you may suspect, enables the processor of that data to track your movements throughout the day. On modern laptops running Windows there is also leakage of GPS data to installed programs, and even to webpages under certain circumstances. Apple laptops are by default prevented from leaking GPS data, but this can be overcome, especially in earlier versions of macOS. Your car, if it has satellite navigation, records your start, end and route in their entirety, and the more upmarket vehicles ship that data over the cellular network back to base. Combine this GPS data with detailed mapping information and you can easily link GPS co-ordinates with actual places (shops, schools, etc.). 

Internet Service Providers (BT, PlusNet, Virgin and so on)

Some reading this may not be aware that your Internet Service Provider has access to every website you visit. They get this via DNS, which is the system that converts a domain name into an IP address. Unless you specifically override it, your ISP will route your DNS requests to their own servers, which then accumulate your website requests against your 'session', that is, your current IP address linked to your account. Using DPI (Deep Packet Inspection), your ISP can also record what you actually do online, such as listening to music, watching video, making phone calls, instant messaging and so on. All this data is accumulated and stored indefinitely, and in this country at least is made available to law enforcement without a warrant. 
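
One way to take the ISP's resolvers out of the picture is DNS-over-HTTPS, where lookups travel over an encrypted HTTPS connection instead of plain DNS. A hedged sketch follows, using Cloudflare's public DoH JSON endpoint purely as an example; any DoH provider works the same way, and note this hides only the lookups, not the subsequent connections.

    # Resolve a name over DNS-over-HTTPS, bypassing the ISP's resolvers.
    # Assumes the 'requests' package; the endpoint is Cloudflare's public one.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "www.gen.net.uk", "type": "A"},
        headers={"Accept": "application/dns-json"},
        timeout=10,
    )
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])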

Amazon

The Amazon ecosystem is slightly different from the general model in that there are no 'free' services; you need an account to buy online, download books, listen to music or watch videos, but that doesn't mean the company won't collect your data, because they do. Everything you search for on Amazon is stored and kept; everything you listen to, read or watch is stored and kept; and all this profile data is used to target search responses and advertisements to your specific interests. Amazon don't make any guarantees not to sell your data (that I can find), so it's safe to assume they probably do. Amazon also has 'Alexa', which further augments the profile by storing what you ask and do with the devices, but this in itself isn't bad and can be used to tailor responses based on your past history. The Amazon Ring doorbell, on the other hand, is nothing but a storm of privacy issues. The doorbell records what it sees from your front door, continuously, and that video is stored at Amazon. You, as the purchaser of the device, have no rights to the data, and the T&Cs clearly state that Ring and its licensees have an unlimited, irrevocable, fully paid and royalty-free, perpetual, worldwide right to re-use, distribute, store, delete, translate, copy, modify, display, sell and create derivative works from the footage taken from your front door, and you paid for the privilege. Whilst there is no law against recording your street in the UK, giving your live video to a third party who can do whatever they like with it would certainly seem to be unwise if not unlawful. With the application of face and numberplate recognition, those third parties could potentially identify people walking and driving on the street, which takes this to a whole new level. Can you stop it? Nope. This doorbell only works when the internet works, and when the internet works it's uploading your video to who knows where. 

 Local Government & Agencies

You may or may not know that your local council is at liberty to sell your personal data to anyone willing to pay. They call this the electoral roll, but in fact it's just a dump of all the people registered to vote plus council tax payers. When you combine this with data from a company like Cameo you then introduce affluence and net worth; link that with Experian or Equifax and you now have creditworthiness, loans, mortgages, bank accounts, and the list goes on, all readily available to purchase.

The DVLA is now also selling your details to companies so if you own or are the registered 'keeper' of a vehicle that data is now also up for grabs. 

And of course the Census data, which you MUST complete by law, is made available for sale to anyone who wants it, which is of course why the Government is exempt from GDPR, along with the Police, the Military, and anyone else you might actually want GDPR to apply to.

PayPal

PayPal, the payment provider, enables easy transactions on many websites and with many vendors. PayPal collects the product, price, location, currency and store, and records this at the point of sale. Whilst collecting this information can easily be justified, PayPal are at liberty to sell this data to anyone else, which further complements your online profile with validated purchases. 

VoIP

There is an ever-increasing number of VoIP providers, most of which are just reselling someone else's service, actively pushing Voice over IP to anyone who will listen. There's no doubt that Voice over IP will become the norm in the future, but currently there are significant risks to its uptake. In an earlier article we showed just how easy it is to intercept voice traffic as it passes through the internet, and this of course makes it really easy for anyone, government or otherwise, to capture and record telephone calls. There are unconfirmed rumours that our own government is already capturing our internet traffic for analysis, and of course voice traffic would be part of that. If you're familiar with the abilities of modern voice analytics then you'll know that your conversation can be quickly converted into a transcript and searched and/or archived. If you've taken up VoIP then ask your provider if they are using SRTP (Secure RTP): you'll either be told no, or they will lie to you. As it stands in the UK marketplace we are the ONLY VoIP provider offering voice encryption, but be aware that even our voice encryption only applies up to the point the call leaves our service, meaning we can ONLY guarantee voice security between GEN VoIP Customers/Sites. To many this shouldn't be a concern, especially considering how much of your data is already in the wind, but for some it is a serious unmitigated concern. 

The Cloud

There are two distinct flavours of "The Cloud". Private Cloud is business-class internet-based storage and services as provided by a myriad of providers, and with enterprise-class providers you can be assured that your data, servers, containers and systems are secure and protected. Public Cloud, which is often 'free', is the sort of service provided by Microsoft (OneDrive), Google (Google Drive), Amazon, DropBox, Apple (iCloud Drive), Datablaze, Box, FlipDrive, HiDrive, iDrive, JumpShare, Hubic, Mega, pCloud, OziBox, Sync, Syncplicity, Yandex.Disk, etc., and these services are absolutely NOT SECURE. This is not only because they are frequently compromised, but because there is zero accountability: it's 'free' and provided 'as-is'. NO business should ever use Public Cloud services for storing business-critical data. If it's important to you then use a service that you PAY for and that has a degree of accountability. 

Cross Contamination

Tracking activity back to your personal profile is done via fragments left on your computer, cookies and sessions left by websites, your browser screen size, and, in a recent discovery, even your sound card, so attributing your activity to you is fairly reliable. There are some cases though, especially in companies where internet access is proxied and where only a few people 'login' to accounts, where others' activity can be falsely attributed to your profile, or yours to theirs. I have personally seen this whilst writing this article, when I requested all my activity from Google. Digging through it, and remember I never use Google, I found a bunch of searches performed as recently as earlier in the week that were from other users on the network and somehow wound up in MY profile. I have no idea how common this is in the real world. 

Controversy

There are some claims on social media that Google, Facebook and others are always 'listening' using the microphone in your equipment, but this has largely been disproved by researchers at the time of writing this article. That doesn't mean it categorically does or does not happen, simply that the evidence to date suggests not. 

Obfuscation

Services such as VPNs and of course the ever-popular Tor Browser are ways to obscure your real identity online, but you'll discover fairly quickly that the services above either don't work at all through them or are deliberately crippled. Google, for example, returns a made-up message about unusual traffic. As VPNs come and go there will always be a short window before their servers get blacklisted, but this will never be a viable long-term solution, and as you'll discover in our article "A VPN is Unlikely to Protect You" above, following this approach requires strict discipline and comes with limitations. 

The sale of data and the data market

All of the above can produce fairly detailed and valuable profiles of your online AND offline activity, but when the separate data collections are combined you start to have very complete profiles linked directly to an individual. This is what should worry people more than Facebook and Google alone. Given that your data is bought and sold on a daily basis, some of these companies have a complete record of pretty much everything you do. Let's see what the total footprint of an average teenager today looks like:

  • Your Name, Address, Race, Religion, Ethnicity, Phone Number(s), Email Addresses, family members, friends, loved ones, and associates. 
  • Your bank accounts and balances, credit cards, loans, and payment history. 
  • Your vehicle, make, model and registration, current tax and MOT status and how much you owe on it if anything. 
  • All Google/Bing/Yahoo searches, Clicks and All Sites visited, comments and posts.  
  • Every instant message you've ever sent or received and the content of all. 
  • All your photos, the date/time and location they were taken, along with everyone who can be identified in them using face recognition. 
  • Your location to within 5m at any time of day, everywhere you've ever been, for how long, how often and with whom. 
  • What music, sports, products, services and videos you like, dislike, watch, download or buy. 
  • Anything you've ever purchased or sold online, be that clothes, shoes, groceries, electronics, etc. 

I think now you must be starting to understand how the data business works and how you're pretty powerless to stop it without some radical changes to your lifestyle, and even then it's too late for most people. It's important to be aware that these companies have done nothing wrong, nothing illegal or even shady; they are all businesses, and their business is your data. I personally like Facebook and Twitter, and Google is a good search engine, but YOU need to make informed decisions about what services you use online and what information you surrender to those services, because changing a few settings on their website will make ABSOLUTELY NO DIFFERENCE.

Apple

Whether you believe it or not, Apple has taken a fairly adversarial approach to data collection, committing to protecting your data not only on your devices but also online, with anti-tracking features in their browser (Safari). But in the scale of things, and despite Apple's best intentions, it's not going to make very much difference in the end. The only way for Apple to make an effective dent in the data collection market would be to block all social media and search engines from users' devices, which they won't do for obvious reasons, and in the real world everyone has to make their own decisions about what they do and don't use. 

 

The near future

There's no doubt that data collection and dissemination is a business model that's here to stay, and you have to look at both sides of the argument. Imagine how much easier it is for our police to be able to tell exactly who was where and when, or how pattern analysis of messages and movements could identify possible crimes before they are committed; then imagine a world where your every move is recorded, analysed and reported. There are always two sides to it. 

Notes: 

Although GEN VoIP encryption can only secure voice communications between GEN VoIP Customers/Sites, we also offer VoIP encrypted to mobile phones using a local app, so for Company Site <-> Company Mobiles we can guarantee voice security.

 

Copyright

© 2018 GEN, E&OE

The 2017 Toyota Prius PHEV


We recently selected the Toyota Prius PHEV for our 2017-2020 fleet, and after 6 months it's time for a real-world review. The new Prius PHEV comes in two flavours, the Business and the Excel. The former lacks many of the refinements yet has an optional solar roof, whereas the latter is probably the only sensible choice but cannot have the solar roof.  

The Toyota website quotes "Fun to drive" as one of the USPs for the Prius PHEV, and indeed it is much more fun to drive than the regular Prius. In electric-only mode it's fast and sporty, so much so that even in damp conditions it's hard to keep the front wheels stuck to the road. In hybrid mode it performs pretty much as the regular Prius. The quoted range is 30 miles, and we can achieve that if driven very carefully and without any heating, but in the real world you can expect 21-26 miles, and in the winter it's more like 18-20. When pushed, the traction control doesn't seem to control anything, and you're left with the same understeer issues you would expect from most front-wheel drives. It would have been nice to have seen a rear motor, as in the Estima, for even more go and some four-wheel handling, but sadly not.  

The city drive is really good, very sedate and comfortable, especially in traffic, and you have to believe that this scenario is the real purpose of the PHEV. Motorway driving is good, but there is significantly more engine and road noise, which requires an adjustment in expectations; again, it's a city car for sure. You have full control over EV or HV modes, allowing you to mix and match to obtain maximum fuel economy on longer journeys. A good example here would be a 40-mile round trip that involves around 50% at 50mph and the rest slower in the city: select EV for the city driving and HV for the longer, faster runs, and this works great. You can even 'charge' the battery whilst in HV mode should you need it. 

Once the battery is empty you're back to hybrid mode, which seems to regularly achieve 50-55mpg. That is very respectable, but overall performance is severely diminished. One point to note is that Toyota seem to have failed to match the relative throttle position of the EV and HV modes, so when switching back and forth you're required to adjust the throttle, which takes a little getting used to. 

Exterior

The exterior style is unique and truly stunning, and was a large component of our purchasing decision. With its quad LED headlights and sleek aerodynamic profile, this is one of those vehicles that stands out from the rest. The alloy wheels are also fairly distinctive, although I would have preferred some alternative options. The vehicle is available in only 4 colours, and black isn't one of them, which is a shame; more options here would certainly not go amiss. The rear boot glass is elegant and expensive, but because of it the car lacks a rear wiper, and it could do with one. 

Interior 

The interior, when compared to the previous Prius Plugin, is a significant upgrade, and everything feels a little more upmarket. Comfortable leather seats further enhance the experience, and the cabin is quite spacious even for the larger occupant, all of which enhances the driving experience. There are however a few complaints to consider: the dash decor that sweeps to the left from the infotainment system is just a crap trap, and with the sweeping dash the windscreen is hard to reach and clean, but these are generally very minor issues. The cup holders are generous and easily accessible, as is the Qi charging tray, but there is a definite lack of somewhere to put your crap, which now tends to occupy one of the cup holders. The storage area between the front seats is OK, but the lid opens sideways rather than backwards, making it very awkward for the passenger to access and quite awkward for the driver. The steering wheel is smaller than most, but with the power assist it's more than acceptable. 

The heating, however, is utterly worthless. I know it's an EV, and I also know that EVs have poor heating, but this vehicle seems to excel at poor heating. There is an option to pre-heat from the key fob before a journey, but that just steams up all the windows and defrosts nothing; when you get in the vehicle you then need to use de-mist, which starts the petrol engine, so what possible benefit that is I'm not sure. At 0°C outside I ran the pre-heat three times and it didn't even clear the frost from the front window, let alone the rear ones. Even on FAST mode, heat set to HI, driver only and in Power mode, the heating still struggles to heat the cabin in moderate exterior temperatures. It's so bad, in fact, that the side and rear windows permanently steam up, which means you need the rear de-mist permanently on, and that is also underpowered. There are heated seats in the front, but those also seem underpowered and were definitely an afterthought judging by the ridiculous location of the switches.

But climate aside, the interior is a pleasant environment in which to spend your day. The infotainment system is covered separately in our Toyota Touch 2 & Go review, so I'll skip over that for now and focus on something that caught us by surprise a little: the boot.

A large part of the boot is taken up by the batteries, leaving a greatly reduced cargo area. We didn't initially see this as a problem, but once you start loading it up with equipment you soon find that the back seats are lost to overflow, so consider this carefully.

Charging

The vehicle comes with a charger for a normal 13A socket, which takes 4 hours to charge. Additionally, you can have a hard-wired charger installed at your property that will charge at 16A, reducing the charging time to 2 hours 10 minutes. Unfortunately that's the fastest it will charge, even though most properties are able to supply 40A, which would charge it in less than an hour. This makes charging on the go a no-go, but charging at work is still doable.

You are able to set up charging schedules so that your daily charge can be taken at off-peak times on cheaper electricity, and when you turn off the vehicle you have the option to bypass this scheduling and charge immediately if required, which is nice.

Driving Features

The new Prius PHEV comes with a wide range of driving features, which I'll address individually here, but collectively it's a nice package that is rarely seen on a vehicle at this price point.

HUD (Head Up Display)

The Prius has featured a head-up display for many years and generations, but in this model the display is further enhanced and very visible. It's also a colour display, which is great, although the normal content is monochrome; I assume a single colour is used to keep it as clearly visible as it is. The only downer for this feature that I can see is that the satnav is *not* replicated to this display, as it is in most, if not all, other vehicles with a HUD.

Automatic High/Low Beam Headlights

A well-thought-out system that works in the majority of cases, even if it's a little slow to react sometimes, and it only works above 40mph, which can be annoying. The system is activated by a switch located near your knee, which is unfortunate, making it a distraction to turn on and off. Overall, however, it's a good system as long as you understand its limits.

Radar Cruise Control

Not so well thought out, and the sort of system that seems to work great right up to the point where it quits, which is usually as you're approaching the vehicle in front at speed. Further, when you are trying to engage it, it sometimes just won't, and gives no feedback or reason why. It seems to work well in queuing traffic, but again occasionally just quits for no reason. When it does quit, the warning is tame and often missed, leaving you to discover that it's not going to brake for you at the point when you're thinking 'why isn't it braking?'. Another annoyance is that it constantly feels the need to display pointless images and messages on the dash, obscuring key information, and you cannot turn that off. On roads with corners (not that we have any of those in the UK) it regularly loses track of the car in front, accelerates, then spots it and brakes again, usually in the corner, which can be worrying and is just bad implementation. So overall it works, but you've got to supervise it at all times and be prepared for its failure.

Road Sign Recognition

It does, but it doesn't. Road sign recognition is probably a good idea, and I'm sure it works great in Japan, but here it either gets it wrong or misses the signs altogether. Turn it off and move on.

Collision Protection

Well, this kinda works, and if you're using the radar cruise control then you're going to get a chance to test it from time to time. The only problem is that when it detects an imminent crash it displays BRAKE in red on the far left of the dash, accompanied by a fairly feeble beep that serves no purpose. For such a function to be effective it should beep loudly and flash everything, so the driver is immediately aware that they need to take action.

Lane Departure

This works most of the time, although it can become very annoying after a short while, especially on country roads where the road markings are not so clear. On the motorway, however, it seems to work great. There is a button on the steering wheel to switch it on and off, which makes managing the feature very easy.

Automatic Parking

Well, this is one of those features that does work if you have the patience to let it, or if you're not able to park yourself. For me it's a gimmick that will never get used except to test it, because I can park, and I can do it much quicker and more accurately, but some may find this feature of use. The vehicle does have all-round sonar, so parking by ear is easy to do yourself.

Driver Information

The Prius boasts two 7" displays that form the digital dashboard, and it does have all that the regular Prius has, but it seems severely lacking in driver information for EV mode. It does show the average MPG and average kWh, but for a single journey you cannot get the kWh used or regenerated, nor can you get the kWh remaining. Furthermore, on the infotainment display you can get regenerated power whilst in Hybrid mode, but in EV mode it shows nothing. The 'battery gauge' is confusing, and the Toyota manual does a bad job of trying to explain it.

It's as if the software was tweaked slightly to make it work with the EV, but they couldn't be bothered to add the key functionality and data that you or I might want, which detracts from an otherwise good vehicle. To take it further, all this data that's collected cannot be downloaded or exported anywhere, even though there's a USB port, which for a business makes it hard to track per-mile performance metrics. Ideally you would want to be able to download a record of kWh used, kWh regenerated and fuel used, which would give everything needed. I know Toyota don't expect to sell that many PHEVs, but for the price they could at least dedicate some time to driver information.

The Economics

There's a lot of talk around the economics of EVs over conventionally fuelled vehicles, but it's really down to your driving requirements, and some math has to be done to work out if it's going to be worth the extra cost, so let's do that now.

Assuming that we take the purchase cost, grant, servicing, MPG, etc. from the official Toyota website and throw in servicing and tyres, then we get a total cost of ownership over 5 years of £31670 for the Plug-in vs £30470 for the standard Prius, excluding any finance charge (finance varies significantly, so we're going to assume here that you purchased it outright).

Next we need to know the driving patterns for the year. Initially we're going to consider 15k miles per year with an average journey of 30 miles; that's around 500 journeys per year. I'm going to take the EV range as 25 miles for a yearly average, the cost of fuel at £5.50 per gallon and electricity at 13p/kWh. From that we can calculate the fuel and electricity costs for your 500 journeys, which come to

£412.50 per year for the Plug-in vs £1586.54 for the regular Prius; that's £2062.50 over 5 years for the Plug-in and £7932.70 for the regular Prius.

That brings the cost of travel for your 5 years to £33732.50 for the Plug-in and £38402.70 for the regular Prius, a saving over 5 years of £4670.20.

So, if you're a 15k-a-year driver running an average of 30 miles per journey, then you're going to be a winner with the Plug-in. For business, however, we'd need to consider an average mileage of around 60k and an average journey of 150 miles, so let's do the math.

£5130.00 per year for the Plug-in and £6346.15 per year for the regular Prius. Again we'll add in the cost of ownership to give a 5-year travel cost of £57320.00 for the Plug-in vs £62200.77 for the regular Prius, a net saving of £4880.77.

So, on a scale of economy the Prius Plug-in is a clear winner for both domestic and business travel, with the benefit being significantly greater if you can keep your average journeys to 25 miles or less. Of course, be aware that we're using Toyota's values here and these may not be real-world applicable. I'll add these figures into a table below to make them easier to see.

Vehicle                    Miles/Year   Average Journey   Cost of Ownership   Fuel & Electricity / Year   Total Cost of Travel / 5 Years
2017 Prius Plug-in Excel   60000        150 miles         £31670              £5130.00                    £57320.00
2017 Prius Excel           60000        150 miles         £30470              £6346.15                    £62200.77
2017 Prius Plug-in Excel   15000        30 miles          £31670              £412.50                     £33732.50
2017 Prius Excel           15000        30 miles          £30470              £1586.54                    £38402.70
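
If you want to run these numbers against your own tariffs and mileage, here's a minimal sketch in Python that reproduces the table above. The per-mile figures are back-calculated from the numbers in this article (roughly 0.1 kWh per EV mile and 55mpg in HV mode for the Plug-in, 52mpg for the regular Prius), so treat them as Toyota-flavoured assumptions rather than real-world measurements.

FUEL_PER_GALLON = 5.50   # GBP
ELEC_PER_KWH = 0.13      # GBP
EV_RANGE = 25            # EV miles per charge, yearly average
EV_KWH_PER_MILE = 0.10   # implied by the figures above; real world will be higher
PHEV_HV_MPG = 55         # Plug-in once the battery is empty
PRIUS_MPG = 52           # regular Prius

def yearly_running_cost(miles_per_year, avg_journey, plug_in):
    # Assume one full charge per journey: EV miles first, petrol for the rest.
    journeys = miles_per_year / avg_journey
    if not plug_in:
        return miles_per_year / PRIUS_MPG * FUEL_PER_GALLON
    ev_miles = journeys * min(avg_journey, EV_RANGE)
    hv_miles = miles_per_year - ev_miles
    return (ev_miles * EV_KWH_PER_MILE * ELEC_PER_KWH
            + hv_miles / PHEV_HV_MPG * FUEL_PER_GALLON)

for miles, journey in ((15000, 30), (60000, 150)):
    for name, ownership, plug_in in (("Plug-in", 31670, True),
                                     ("Prius  ", 30470, False)):
        cost = yearly_running_cost(miles, journey, plug_in)
        print(f"{name} {miles:>6} mi/yr: £{cost:8.2f}/yr, "
              f"5 year total £{ownership + 5 * cost:9.2f}")

Shrinking the average journey below the EV range pushes the petrol term towards zero, which is why short-journey drivers come out furthest ahead.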

Final Thoughts

I personally like the car, and I like driving it, especially in electric-only mode, but some may find the greatly reduced cargo area combined with the lack of colours and options too much of a stretch. It is, in my opinion, a far better option than the Ampera/Volt (which we had before the PHEVs) because it's more fun to drive, more comfortable and more stylish. You will also find some incentives available at your local Toyota dealer which can make the relative premium more manageable.

There is a wealth of information on the Toyota.co.uk website, but be aware that certain parts of it do not work, like 'My Toyota', which just gives you a blank page when you try to log in.


Synology Hyperbackup and Certificates

Hyperbackup is a backup system provided by Synology on their Diskstations and Rackstations, and it's a good product, as is the hardware. But like most things Synology, the term "set it and forget it" does not apply, as this customer found out to their detriment.

The Synology NAS system has a web interface, which is in fact very good and well designed. It allows, amongst other things, you to set up an SSL certificate to encrypt web traffic. This can be a self-signed, purchased or Let's Encrypt certificate, and in the latter case the renewal process is automated, which is nice.

The problem comes when your SSL certificate changes, which it would normally do annually for a purchased cert or every 90 days for Let's Encrypt, at which point everything breaks, including Hyperbackup, and the cause isn't immediately clear. Hyperbackup indicates that the destination for your backup is offline; you would of course check the backup server and find it online and running. You would check the firewall settings, probably restart the services, maybe even reboot the server, but nothing is going to make this work again until you go into the settings and get as far as the target, at which point you notice a certificate warning.

Yes, seriously: because your certificate renewed, and even though you've specifically not enabled transfer encryption, the backup process grinds to a halt. You are required to press "Trust Server Certificate" to continue, after which the backup will resume until the next certificate change (90 days for Let's Encrypt, a year for purchased). Why? What possible purpose can there be in halting the backup every time a certificate renews, and why is there no way to prevent it?

Just as a side note, other things that break include all the iOS applications, CloudStation Backup, CloudStation and probably more. If you are going to use a Let's Encrypt certificate, and I would encourage you to do so, then every 90 days you need to make a note in your diary to go to all the servers and click all the buttons, or stuff will stop working.
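
If you'd rather have some warning than rely on a diary note, a small script on any machine can watch the certificate for you. Here's a minimal sketch, assuming a hypothetical hostname, DSM's default HTTPS port of 5001 and a publicly trusted certificate such as Let's Encrypt:

import socket
import ssl
from datetime import datetime

HOST, PORT = "nas.example.com", 5001   # hypothetical; use your own NAS here

ctx = ssl.create_default_context()     # validates against the public CAs
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# 'notAfter' looks like 'Jun  1 12:00:00 2019 GMT'
expires = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
days_left = (expires - datetime.utcnow()).days
print(f"{HOST}: certificate expires {expires:%Y-%m-%d} ({days_left} days)")

Run it daily from cron; when days_left suddenly jumps back up to around 89, the certificate has renewed and Hyperbackup is about to sulk.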

Update 19/09/2018: Just had another new customer today who's had a volume crash; his Hyperbackup stopped working because of this about 6 months ago, so we're now in the position where he's shipping the unit back to us and we're going to have to attempt volume recovery. PLEASE CHECK YOUR HYPERBACKUP IS RUNNING REGULARLY.

Update 20/01/2019: Synology released an update that effectively FIXES this whole scenario by allowing you to ignore certificate errors/always trust. We're briefing this out to our base and recommend you re-visit your Hyperbackup client and make the change. Nice one Synology!


Whois Information Fraud


A very long time ago, when the internet was young, someone had a great idea: rather than remembering 192.168.111.245, we could use a sensible name that people could remember, like "email". This was called a hostname, and hostnames were stored in text files, but that wasn't good enough, and so the concept was further developed into what we now know as the Domain Name System. The Domain Name System (DNS) that we know and use today is basically the same; we have top-level domains such as com, net, org, uk, us, eu and so on, and under these, registries administer the second-level domains.

An example would be gen.net.uk. In this case the top-level domain uk is administered by the registry Nominet. If someone wants to view our website (this website), then upon entering it into their browser, their computer will ask the top-level name servers who is responsible for uk and be given Nominet. Then Nominet will be asked who is responsible for gen.net.uk, and that will be GEN, and finally GEN will be asked for the server address of www. All this magic happens without any user involvement and takes fractions of a second.
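
You can watch the same chain yourself. Here's a minimal sketch using the dnspython package (pip install dnspython); the resolver does the registry-hopping for you and hands back the answers:

import dns.resolver

# Who runs the zone? (the name servers the registry delegates to)
for ns in dns.resolver.resolve("gen.net.uk", "NS"):
    print("nameserver:", ns)

# And finally, what address does www live at?
for a in dns.resolver.resolve("www.gen.net.uk", "A"):
    print("address:", a)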

This article is specifically targeted at the registries. In the example above it was Nominet, but every country has at least one registry, and with the expansion of top-level domains into things like .email, .digital, .academy etc., there are now even more registries that are not country-specific.

When you register a domain name with a registry, they will require you to provide information such as the owner, their address, phone numbers and email address, and the same for the administrative contact, technical contact and billing contact, and this information is publicly available for anyone to access via a service commonly known as WHOIS. You can use our WHOIS tool on the GENSupport website to find out what information is available for any domain. Some registries allow certain information to be hidden for an additional fee, and others don't. Nominet, for example, will not allow information to be hidden, even for an additional charge, unless the registrant is an individual. Having all this information publicly available when there's absolutely no reason to do so presents fraudsters with a virtually unlimited target base with a perceived credibility greater than the usual daily scam emails. We'll look at one common fraud that regularly hits the HelpDesk here at GEN.

Whois Information Fraud

Now that sounds quite important, and for companies who don't have their own dedicated IT department, or who haven't outsourced, there's an information vacuum that the fraudsters leverage with such scams. This particular one is quite expensive at $86, but even so I've no doubt that some smaller companies will pay it for fear of losing something they need, without fully understanding the implications. This example is just one of many such scams, all with different wording and layouts, but all trying to take your money for something you don't have.

Let's first look at how it got here...

Received: from reliance.gen.net.uk ([127.0.0.1])
	by localhost (reliance.gen.net.uk [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id JRVvXwltlucK for <[recipient hidden]>;
	Sat,  9 Sep 2017 22:07:33 +0100 (BST)
Received: from mail.szjdyd.org (j115-58.sjc1.ethr.net [216.224.115.58])
	by reliance.gen.net.uk (Postfix) with ESMTP id 7E93D5F085
	for <[recipient hidden]>; Sat,  9 Sep 2017 22:07:29 +0100 (BST)
Received: from ([127.0.0.1]) with MailEnable ESMTPA; Sun, 10 Sep 2017 05:07:26 +0800

So it originated from a host in the USA, namely j115-58.sjc1.ethr.net [216.224.115.58], which is operated by Ethr.Net LLC, and all the information in this scam is taken from the WHOIS record for the domain in question; we know this because of the information in the fraudulent email. The 'Secure Online Payment Link', which in this case goes to "bit.ly/2wOlh4L", is just a redirector (a website whose only purpose is to direct you to a different site) which directs us to "www.whoisworks.win", where we're presented with a set of options to pay money. What is moderately entertaining is that the WHOIS information for this domain isn't obscured in any way, and we see that the owner of the domain is

Registrant Name: wu zhiying
Registrant Organization: wu zhiying
Registrant Street: cuixiangjiedao635hao
Registrant Street:
Registrant Street:
Registrant City: zhuhai
Registrant State/Province: Guangdong
Registrant Postal Code: 519000
Registrant Country: CN
Registrant Phone: +86.75638971201
Registrant Phone Ext:
Registrant Fax: +86.75638971201
Registrant Fax Ext:
Registrant Email: [email hidden]

This could well be made up, but moving on, the payment link from the website, which doesn't even use SSL, just takes us in a loop capturing card details for the fraudsters to sell, use, or both.

Until someone actually decides that making this information public is a ridiculous idea, the endless scams will continue and we're stuck with workarounds.

Whois Privacy Options

Assuming you don't want to publicly broadcast your name, address, phone number and email, your options are limited to a WHOIS privacy service such as the one that we offer, which simply registers the domain using a subset of our details, therefore directing scams to us instead of you. This means that we need to 'administer' the domain by responding to the nonsense sent by registries from time to time, but we don't mind doing this for our customers and charge nothing for the service. Other providers do charge, but it's generally a fairly nominal fee of around $5 per year.

Know Your Domain & Services

When you have one or more domains, there will be an annual registration charge which will be invoiced directly to you by your registrar. If you registered through GEN, or migrated your domain here, then we'll send you an invoice yearly. There are no other annual charges for the registration of your domain name.

If you have services on that domain name, such as a website and email, then charges for these, which are usually annual, will be invoiced to you directly, so know who hosts your website and provides your email services, and if you're ever in doubt then ask them before paying anything that arrives in your inbox unexpectedly. Never pay for something if you're unsure. If you are a current, past or future customer of GEN, then the HelpDesk is available 24/7 to answer your questions, so please ask.

GDPR and the Chaos Factor

Since writing this, many if not all registrars have cashed in on the GDPR (Global Data Profit Regulation) by offering to hide your information from the public WHOIS, usually for a fee ranging from $3 to $10. Whilst this is great, and many have taken up the offering, with some registrars even providing it free, this move has now increased the value of WHOIS data, which is being traded online by companies who scraped the WHOIS before it was restricted. This means, in effect, that GDPR and WHOIS privacy are only effective for domains that are newly registered. Any domain name registered prior to May 2018 has already been scraped, and the data is available for sale, so paying an additional fee to hide it is just money down the drain. You are absolutely no more protected now than you were before, and you will still receive fraudulent demands for payment that you need to be aware of, and ignore.


Synology Auto-Update


We've been actively promoting Synology Rackstations for many years now, and they provide exceptional performance for our customers, but they also come with a few gotchas that you need to be aware of when running them. If you have managed storage or any of our support or outsourcing services, then we'll take care of these units for you; if not, then please read on.

Auto-update is an important part of any maintenance strategy, and of course Synology provides this functionality, which can be found in Control Panel / Update & Restore / Update Settings.

We set updates to be applied automatically at 3am when available. This means your system will always be up to date with the latest patches and fixes.

A second level of protection comes from the Package Centre auto-updates, which can be enabled in Package Centre / Settings / Auto Update.

But you can never leave your Synology servers to just update themselves without intervention, as we discovered today when we found that all our customers with managed storage were showing package updates available (via CMS) but weren't auto-updating. We investigated further and found that Synology have made a change that seemingly affects everyone.

When opening the Package Centre from DSM on the server, you are met with a new dialogue containing a checkbox that must be ticked, and of course all the updates have stopped auto-updating because of it.

Now, we have 300+ Synology servers under management, and so far today we've only managed a fraction of that, but over the next few days we'll log in to each of the boxes, tick the box and then let auto-update do its thing. If you are using a Synology NAS, then double-check this now and make sure you've got it ticked, then apply any outstanding updates.


USB Flash - Built-in Failure


With the slow decline of CDs and the long-lost days of floppy diskettes, USB portable storage has become commonplace. Memory stick, thumb drive and pen drive are common terms for the same thing: a USB mass storage device based on FLASH. Yet many people don't know that the whole technology behind FLASH storage has a very limited lifespan, which leads me on to the relatively high volume of data recovery requests for USB storage coming through the channel.

Flash memory is generally of two types, NAND and NOR. Both technologies allow permanent storage of data without needing a power supply. NAND requires data to be read and written in blocks called 'pages' and is by far the most common FLASH memory in use today.

FLASH memory, like all memory, stores data as 0s and 1s in a vast array of cells, but the method by which the data is permanently written involves pushing a charge (electrons) through an insulating layer; once through the insulator it's stuck there and will remain until it's pulled back through the insulator, thereby changing the state.

However, this 'pushing' and 'pulling' through the insulator, known as tunnelling, slowly breaks down the insulator until it fails. When an insulator fails this affects only that cell, but of course just one bit that won't switch will adversely affect the data when read back. Furthermore, certain areas of the flash drive are read and written much more than others, namely the master directory and the file allocation tables, both of which are changed when data is read (changing the last access time) and written (changing the last updated time and changing the allocation of storage in the file allocation table). This means that in many instances the part of the flash drive that fails first is the most important part: the part that tells us what files are stored on the drive and where they are stored.

Cheap vs Expensive

When it comes to flash drives, there is a real physical difference between the budget end of the market and the professional end, because NAND/NOR FLASH comes in many different flavours depending on its performance and expected lifespan. Often the cheapest FLASH ICs are designed for storing firmware in embedded devices, where write performance is a non-issue and the expected number of writes is very limited, maybe 10 writes in the entire lifetime, whereas the most expensive FLASH is designed specifically for high speed and many write cycles, and this is the correct hardware for USB flash drives. If you can buy a 128GB flash drive from SanDisk for £30 and an unbranded one for £5, then the lifespan and performance of your SanDisk drive will be many, many times better than the unbranded one.

I guess I should also point out that some cheap unbranded USB flash drives (or knock-off branded ones) are engineered to falsely report their capacity. This is done by creating a partition on the drive with false data, so the computer you connect it to thinks it's larger than it is, and the only way to be sure is to try and fill it up or to perform a low-level reformat. This sort of storage fraud is often seen on sites like eBay promising 1TB of flash for $10, which is nonsense.
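
If you suspect a drive, you can test it the same way dedicated tools like f3 do: fill it with data you can predict, then read it all back. Here's a minimal sketch (Python 3.9+), assuming the stick is mounted at a hypothetical path; everything on it will be overwritten by the test file:

import os
import random

PATH = "/media/usbstick/testfile.bin"   # hypothetical mount point
BLOCK = 1024 * 1024                     # 1 MiB per block
BLOCKS = 1024                           # test 1 GiB; raise towards the full size

def pattern(i):
    # Each block's content is derived from its index, so it's predictable.
    return random.Random(i).randbytes(BLOCK)

with open(PATH, "wb") as f:
    for i in range(BLOCKS):
        f.write(pattern(i))
    f.flush()
    os.fsync(f.fileno())                # push the data past the OS write cache

with open(PATH, "rb") as f:
    for i in range(BLOCKS):
        if f.read(BLOCK) != pattern(i):
            print(f"verification FAILED at block {i}: the capacity is fake")
            break
    else:
        print("all blocks verified")

For a strict test, unplug and re-insert the drive between the write and the read so nothing is served from the operating system's cache.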

Recovering data from failed flash drives isn't that hard, but it does bring some challenges, because the data will have errors in it where specific cells, or indeed entire pages, are stuck and unresponsive. It's not always possible to identify these areas during the scan; they often read as OK but with incorrect data, or they read as all 0s. After re-assembling the filesystem as best we can, it's over to the client to work through the recovered data and validate it.

The bottom line here is never, ever rely on a USB flash drive for data storage. It's not safe, certainly not guaranteed, and it will fail at some point. Stick with brand names and stay away from the budget end of the market.


Synology CloudStation in the Corporate Environment


If you've invested the time and money in Synology RackStations, then you're probably going to want to take advantage of some pretty cool embedded features. One such feature is CloudStation and its associated CloudStation Sync and CloudStation Backup, which collectively allow for near-realtime local file synchronisation with a server, providing up-to-date files for remote users, a multi-versioned backup for desktops and laptops, and realtime sync between servers across sites. There is however one serious flaw in the plan that you need to be aware of before you roll this out across the business, and that's SSL.

When you set up your RackStation(s) you probably set up SSL, and would have used the built-in Let's Encrypt support, which promises a valid certificate every 90 days, or you would have installed a paid certificate, which renews annually in most cases. Having set up your SSL certificate you would of course want your clients to use SSL when connecting to the server so the transfer is a little more secure, but here's where it all goes down the tubes: if you did make the mistake of selecting SSL when you set up the clients, then every 90 days (or annually) all the clients are going to silently stop working, and no one is going to notice for a while.

If a user actually opened CloudStation Backup to restore a file, they would be met with an unhelpful connection error.

And should they click on Version Explorer, they get an equally unhelpful one.

In fact there is no way out of this without going into Settings, then Connection, re-entering the user/password and applying. In a corporate environment the end user may well not be privy to the Synology user/password, but even if they were, it's now too late, because CloudStation Backup hasn't been backing up since the last certificate renewal. The ONLY way around this is to turn off SSL, or you'll be back here again before you know it. It's a real shame, because SSL is a nice feature, but in a corporate environment it's not essential unless you're allowing remote sync.

I have no doubt that Synology will resolve this in due course, but until then, keep SSL off to save a bunch of time and effort.


eMail Security and Retention


I was asked a few days ago by one of the Partners if we could retrieve an email from a year or more ago, and of course the answer was no, but that left me thinking about the question itself and the wider implications. I think it's pretty much understood that if you choose to host your email at Microsoft, Google, BT and so on, then your every email is going to be archived away somewhere for all time, and will no doubt be available for anyone with sufficient clearance to review, trawl, analyse and so on, and that's fine as long as you know it's happening. At GEN we offer a secure service which, by its very nature, is not archived anywhere unless that functionality is specifically ordered by the customer, and that's rarely the case. We do, however, take backups, so I think it's important to define exactly what we do and don't do here.


Your email is stored in an encrypted format on the physical server media and the key to decrypt this format is different for each mailbox. 

There is a snapshot of the entire server cluster taken hourly on a 96-hour rotation; that is, the oldest snapshot we have is 96 hours old. These snapshots are taken as part of our disaster recovery process, meaning that even if an entire datacentre were destroyed, your email service would resume shortly afterwards at a backup site which is always in place.

Your mailbox is protected to some degree from brute force attacks by a system which actively monitors such behaviour and blocks attack routes in real time. 

Server free space is defragmented daily as an overnight process. 

Logging of email traffic, including date/time, sender, recipient and size, but not contents, exists for 7 days on the anti-spam and anti-virus gateways and for 3 days on the mail servers themselves. We use these logs to satisfy all those tickets people raise complaining that their email isn't reaching someone, or that someone trying to send them an email isn't getting through, and so on.

So, unless you specifically ordered email retention, when you delete an email it's gone from the email server immediately, from our logs 7 days after receipt, and from our snapshots within 96 hours.

Keeping your email secure...

If you consider that when you send an email from A to B then the following are involved: 

  • Your PC has to store the message to be able to send it
  • Our server receives the email from you, stores it in your Sent Items (encrypted) and then sends it on to the recipient's server
  • The recipient's server receives the email from us and stores it on disk, maybe in the clear, and then stores it in the recipient's mailbox
  • The recipient's PC retrieves the email and stores it on disk, maybe in the clear

So there are many points of compromise here, and some of the most vulnerable are the sender's and recipient's PCs. To completely remove this risk, use only webmail or an email client that stores your email with strong encryption.

We've already covered our servers, but the recipient's server(s) are a real risk too. If the recipient is using a server which does retain everything, and you wouldn't know without checking, then your email is once again going to be stored for all time.

Any way around this? 

To keep your email as secure as reasonably possible between sender and recipient:

  • Both parties should be on the same server, which negates the risk of a second server with unknown retention and security, and also negates the risk of a man-in-the-middle attack by anyone compromising your DNS.
  • S/MIME or GPG should be used to provide a second layer of encryption to further protect the email's contents; in the case of S/MIME this will also provide validity guarantees (see the sketch after this list).
  • Webmail only should be used, as this will not store a copy of the email on local devices.
  • A secure access service such as GEN SAS can be used to ensure an encrypted tunnel into the GEN infrastructure and onto the mail servers.
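
As an illustration of the second point, here's a minimal sketch of GPG encryption using the python-gnupg wrapper (pip install python-gnupg). It assumes GnuPG is installed and that the recipient's public key, for a hypothetical fred@bloggs.com, is already imported into your keyring:

import gnupg

gpg = gnupg.GPG()                       # uses your existing GnuPG keyring
encrypted = gpg.encrypt("The message body goes here.",
                        ["fred@bloggs.com"])   # hypothetical recipient
if encrypted.ok:
    print(str(encrypted))               # ASCII-armoured ciphertext, safe to email
else:
    print("encryption failed:", encrypted.status)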

But who needs that level of security? Well, anyone who wants their email to be secure, and that might be you; or you might be happy knowing that everything you have ever sent and received is stored and archived somewhere.

I hope this has cleared up any confusion around retention of email data. If you have any more questions, raise them at the HelpDesk.


Browser Cache, Transparent Proxies and more


One of the questions that comes up time and time again on the Helpdesk is: what is my cache, where is my cache and what am I supposed to do with it?

Well, the question itself often arrives on the back of conversations with content providers and developers, usually around out-of-date content, so it's worth taking a few minutes to explain what the cache is, where it is and why it is.

A cache, pronounced "cash", is masterfully defined as "a hiding place used especially for storing provisions" or "a place for concealment and safekeeping, as of valuables", and that's not too far from the truth. The cache is indeed a place for storing provisions of the digital kind. You see, the internet isn't anywhere near as fast as you experience it from a browser on your PC, because the internet is just a collection of many different networks all connected together to provide a 'route' from your PC to the server at the end of a browser request. Let's look at this in more detail now.

When you type a URL into your browser, for example http://www.gen.net.uk, and press enter or go, the browser uses the operating system of your device to open a connection to www.gen.net.uk on port 80 (port 443 if https://) and request that page. The actual request sent to the remote server looks like "GET / HTTP/1.1", which means get the page at / (the default or index page) using HTTP/1.1, which just names the protocol version. The response from the server will be an HTML page, which the browser then displays to you as the client.

Now where does caching fit in here? Well, when your browser receives the HTML page it stores it locally in a cache (which is just a hidden folder on your PC), and with that it stores the date and time the page was retrieved. Now if you close the browser, open it again and again type in http://www.gen.net.uk, then this time something magical happens: the browser realises that it's just been to www.gen.net.uk and just received the page at /, so rather than bother requesting it again it just returns the one it stored a few moments ago. Simple and fast, right?

Well, it gets a little more complex than that, because the server, when returning the page to the browser, can in fact indicate whether or not the browser should cache it, and if so, for how long. The page at www.gen.net.uk/ at the time of writing does not give your browser any special instructions around caching.
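
You can see exactly what a server tells your browser by inspecting the response headers. Here's a minimal sketch using the requests package (pip install requests):

import requests

r = requests.get("http://www.gen.net.uk/")
for header in ("Cache-Control", "Expires", "ETag", "Last-Modified"):
    print(f"{header}: {r.headers.get(header, '(not set)')}")

A Cache-Control of no-store forbids caching entirely, whereas max-age=3600 permits the browser to reuse the page for an hour without asking again.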

So, hopefully that's a little clearer: when you type in a URL or follow a link, if your browser's already been there recently then you'll get the cached version rather than the 'live' version, unless the site specifically told the browser not to cache. This really becomes visible if you have your own website and you or your developer have made changes but you just can't see them; it's all in the cache. Clearing the cache is simple enough, the option can be found in your browser's menus should you require it, and issuing repeated refreshes (CTRL+R on Windows, CMD+R on Apple) will also generally force the browser to reload the live page.

Now, as I said before, the internet is nowhere near as fast as you experience it, and this is not only due to your browser's magic cache; it's also due to internet service providers (mostly residential) using systems called 'transparent proxies'. This is another cache between you and the sites you browse, and this cache is not optional and in many cases will not yield to a server's request not to cache. The transparent proxies intercept your requests as you make them, look to see if they have a copy of that page, and if so serve it up as if it came from the server itself. Your browser has no idea it's not a live page, and neither do you. By using transparent proxy caching, ISPs (Internet Service Providers), especially residential ones, can significantly reduce the amount of bandwidth they use on their upstream (between them and the server).

There are also, in this country at least, significant privacy concerns around transparent proxying, because your ISP not only intercepts your requests but can keep a log of them tracked back to your IP address, and therefore back to you, so it's a bit of a double whammy. There is a third layer of caching, known as web accelerators, sometimes used at the server side to speed up performance by keeping a cache, but this is under the control of the site owners and as such isn't an issue.

How do you defeat this transparent proxying?

Well, it's not easy, because the ISP has access to all the traffic you send and receive and can easily intercept not only your web requests but your email too, although if your email is stored at Microsoft (Hotmail, Office 365, etc.), Google (Gmail, etc.), Yahoo, AOL and so on, then it's already compromised many times over and this really isn't going to make any difference. There are, however, tools that can cut through the proxies by establishing a 'tunnel' between your browser and a server in another country and making the requests from there; I am of course talking about VPNs and onion routing, the most common example being the Tor Project (https://www.torproject.org/). Having said that, the Tor Project, based in the USA, is probably not going to fill you with overwhelming confidence in the privacy of your data, but it's the best we've got unless you want to spend some real money, in which case you can establish real VPNs to real secure proxies and have true anonymity online.

I think it's also worth mentioning that browser plugins such as Adblock, Ghostery and Web of Trust, to name a few, and of course Microsoft's own 'safe browsing' nonsense, also hijack every URL you visit and pass it back to central servers somewhere, giving them a full history of your browsing habits. By themselves they can't tie that data back to you personally; that is, they know that a PC on the internet with a unique ID visits these websites, but without help from your ISP they can't tie that information specifically back to you as a person. Unless, of course, you log in to your Facebook, Google+, Twitter and so on using the same PC, in which case they can now easily tie your browsing habits back to you personally. The only difference is that your ISP has your postal address, and generally people aren't stupid enough to enter that sort of thing into Facebook, Google+ or Twitter.

So here concludes this little discussion around caching, which has taken a sideways step into privacy and anonymity, but it's all connected of course.


We could eliminate SPAM tomorrow if...


We are all familiar with SPAM; it's the huge volume of unsolicited crap that we have to wade through each day just to do our jobs, and yet there's no sign of it going away, despite us all having the means to end it. So let's look at why we are all being subjected to spam, and then at why we don't end it when we all have the power to do so.

The reason for SPAM

SPAM has three basic objectives, in order of volume:

  • Firstly, the majority of SPAM is an attempt to infect your workstation, laptop, tablet, etc. with a virus and/or trojan. By doing this the spammers gain (a) the ability to scan your system for card numbers, passwords and of course email addresses from your email client, (b) the login credentials for your email account, so they can use it to propagate more spam FROM YOU, and (c) another machine from which to leverage DoS attacks.
  • Secondly, spam will attempt to impersonate an organisation that you might expect an email from, and then trick you into giving up your login, password, account and so on by taking you to a fake website. Whilst you may think most people are wary of this type of spam, you would be surprised how many we still see at the HelpDesk.
  • Finally, some spam is actually trying to sell you something, which is rare these days but does still happen.

Current SPAM defences

  • The blacklist: A number of worthy organisations like Spamhaus, SpamCop, etc are dedicated to maintaining lists of domains, hosts and subnets which are used to originate spam. Using these blacklists is an expensive but effective tool to eliminate a good percentage of spam at the first gate. Blacklists however are not realtime, and there is always a delay between a spammer launching a mass mailing and the blacklists listing it. 
  • Authentication: Several technologies exist to verify sender domains and hosts, such as SPF and DKIM, and these can serve (where used by the receiving server) to block spoofed spam, which constitutes the vast majority of scams. For example, HMRC, who are under constant attack from scammers, specify in their SPF record the two hosts that are allowed to send email for @hmrc.gov.uk, and of course the spammers cannot originate email from those addresses, so SPF wins the day: any email claiming to be from an @hmrc.gov.uk address that doesn't come from the two hosts listed in the SPF record is canned (see the lookup sketch after this list). This all falls down when the receiving server doesn't check, the sending organisation doesn't use it, or the sending organisation has been compromised.
  • DNS: The domain name system is that which converts gen.net.uk to 212.140.242.10 and back again, and when you send email to someone @gen.net.uk, DNS gives up the address of the mail server designated to receive that email, in this case farpoint.gen.net.uk. The host requirements RFCs (RFC 1122/1123) specify clearly that every host on the internet should have forward and reverse DNS, that is, gen.net.uk to 212.140.242.10 and 212.140.242.10 back to gen.net.uk. So, when a host 'spammer.com' connects from 212.140.242.50 to our mail server, we check that 212.140.242.50 corresponds to 'spammer.com', that 'spammer.com' has a valid MX record, and that the host listed in the MX record actually exists on the internet. This is particularly hard for a spammer to forge, and therefore this check eliminates a percentage of spam, as well as a percentage of legitimate email from companies who don't know how to set up very basic DNS correctly.
  • Content Filtering: By far the most effective tool for eliminating spam which passes all the above tests is pattern matching. This involves detecting elements in the body of an email and assigning a score to each detection. An example would be an HTML-only email, which scores 3 points, or external links to pictures, which score 0.2 points each, and so on. The more spammy the email, the more points it accumulates, and once a threshold is reached the message is flagged as spam. Content filtering can make use of content lists maintained by third parties, which provide known phrases and content to score.
  • Bayesian Probability Filtering: A gross simplification of this would be that email which is known to be spam can be 'learned', and that data used to identify 'similar' spam. The area of mathematics is complex and the techniques even more so, but the result is the same: spam that looks like spam based on learned data can be flagged as such, usually by giving it a score, such as +10.
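
SPF records are just TXT records in DNS, so anyone can inspect a domain's sending policy. Here's a minimal sketch with dnspython (pip install dnspython) using the HMRC example from above:

import dns.resolver

for rr in dns.resolver.resolve("hmrc.gov.uk", "TXT"):
    txt = b"".join(rr.strings).decode()
    if txt.startswith("v=spf1"):
        print(txt)   # the hosts permitted to send @hmrc.gov.uk email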

With these methods we can and do filter around 80% of your spam, but it's never going to be 100%, because spammers spend a great deal of their time trying to circumvent these filters, which likewise costs us a great deal of money in continually adapting the filters for maximum effect.
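
To make the content-filtering idea concrete, here's a toy scorer in the same spirit; real filters such as SpamAssassin apply thousands of weighted tests, but the principle is identical:

# Each rule is a test plus the points it adds when it matches.
RULES = [
    (lambda m: "<html" in m.lower(), 3.0),            # HTML-only email
    (lambda m: m.lower().count("http://") > 5, 1.5),  # link-stuffed body
    (lambda m: "act now" in m.lower(), 2.0),          # classic spam phrase
]
THRESHOLD = 5.0

def score(message):
    return sum(points for test, points in RULES if test(message))

message = "<html><body>ACT NOW to claim your prize!</body></html>"
total = score(message)
print("spam" if total >= THRESHOLD else "ham", f"(score {total})")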

BUT, we do have the ability to stop SPAM completely, 100% total removal of spam, so why don't we? Well, quite simply, because in this day and age everyone's an expert when of course they aren't. Using the current standards and systems we could easily:

  • Eliminate the source of SPAM by authenticating the source of all email using both DNS and SPF. This would mean that email can only be sent if it originates from an authenticated server, and if all the ISPs got together and set up their systems in this manner (most already do), then spammers would ONLY be able to send spam by compromising users' email credentials. That's going to immediately eliminate 67% of SPAM.
  • Use the tools we all have available to track, trace and block email origination 'out of zone'. That is, for every email account, the email server will ONLY accept email from the sender's company LAN or their country of residence. This kind of geolocation limiting is already built into all the modern mail systems, but it's rarely used.
  • Use anti-hijack detection to automatically flag accounts that are likely to be compromised by looking for unusual email activity. For example, if a mailbox normally originates 50 emails a day and then suddenly originates 50 emails a minute, we have the systems to automatically block that behaviour until the mailbox owner contacts us.
  • Use S/MIME certification, which is free for individuals and only a nominal charge for businesses. It not only provides transparent encryption of business email, but also provides authenticity to every recipient, so that when you receive an email from fred@bloggs.com, it comes with a 'seal' that confirms the email came from fred at bloggs.com. We've used these for the last decade, but we're pretty much alone in this.

So, it doesn't sound that hard, does it? Well, it's not, but unfortunately, as an ISP with many customers, there are always going to be the few who affect the many. No matter how much you promise your customers a spam-free life, a minority of customers don't want to hear that fredbloggs inc doesn't meet the standards and/or is blacklisted and therefore cannot send them email; they just insist on how important it is that fredbloggs inc can email them. This creates a real problem for ISPs who technically want to kill spam as promised to their customer base, but are also aware of the real-world cost of dealing with ticket after ticket of 'I can't receive email from xxx', the time and effort spent identifying that the sender doesn't comply or is blacklisted, and then trying to explain that to the customer.

So our approach, which has been adapted over the years, is to offer three levels of protection:

  1. No Filter - All email is accepted regardless. All Spam and Viruses are delivered untouched. 
  2. Basic Filter - Some filtering is done, but spam is still delivered with [SPAM] in the subject line, allowing customers to filter it into a spam folder if required. Some antivirus protection is enabled.
  3. Max Filter - All the above fully enabled and active both Anti-Spam and Anti-Virus. 

As we expected, the vast majority of business and corporate customers opt for the Max Filter, with only a very few opting for the other options. The customers who opt for and stay with the Max Filter understand the issues and stand with us in the fight against spam. If a sender winds up blacklisted, they don't tell us; they tell the sender to sort it out.

So what's the future? Unfortunately, with some ISPs favouring an easy life rather than deploying the available protections, with players like Microsoft and Google seemingly doing nothing to limit the spam they collectively originate, and with senders, especially in less advanced countries, unable to configure even the very basic standard requirements, we're going to be up to our armpits in spam for a good while to come. But I do feel that things are changing, as we're already seeing customers migrating to us solely for the benefits of our protection systems, and that means we're doing it right.

There are a number of articles on blacklists, SPF and DKIM in our FAQ, as well as the relevant internet standards RFCs. They are all technically orientated but available to anyone who's interested.



Recent comment in this post
Guest — cjm
Agreed, the lack of technical standards enforcement is the very reason we ALL have to suffer the endless onslaught of spam.
Wednesday, 18 January 2017 17:04

Apple Wi-Fi Assist and Mobile Data Charges

Today at the HelpDesk we were dealing with a corporate customer who was experiencing HIGH mobile data charges and wasn't able to pin down the cause. We had a pretty good idea, and this was confirmed when we took a look at one of the handsets with high usage. In iOS 9 Apple introduced a new 'feature' called Wi-Fi Assist, which is supposed to keep your connection reliable when your wifi is poor, which is great. The issue is that even if you make sure you only use traffic-intensive apps like YouTube when you're on wifi, with Wi-Fi Assist enabled the device can and will use mobile data (without telling you) if your wifi signal becomes weak. That's OK if you have an unlimited data plan, but we all know those don't exist in any form.

Turning it off is easy if you can find it: go into Settings, then Mobile Data (towards the top), then scroll all the way down to the bottom and there it is. On the handset we checked, Wi-Fi Assist had 'assisted' us in using 478K of mobile data whilst we were on wifi. Whilst you're in that screen and have turned off Wi-Fi Assist, it's worth having a look through the apps listed to make sure you've allowed/denied mobile data as needed.
