Archive for October 2008
FYI for those readers who may have customers or partners on Cogent and Sprint networks.
Peering War Breaks Out!: “A border war has broken out between tier-1 providers Cogent Communications (CCOI) and Sprint Nextel (S), they are no longer exchanging traffic. That means that single homed clients of Cogent cannot access data on Sprint’s network, and vice versa. The internet is, for the time being, partitioned, as one can see here. Cogent issued its [...]
In our “Why Downloads Fail” series parts 1 & 2 we discussed end-user behavior and browser limitations as reasons why downloads fail. Today we’re going to try to explain one of the most enervating reasons why large downloads fail: the math. Specifically, the limits of math in download computation.
The math used in computation and file storage can limit the size of files that can be downloaded successfully. It’s as true as it is painful for users and service providers alike.
Modern computers use 32bit or 64bit computing architectures. The word “bit” refers to the “binary digit”, the 0s and 1s used as the foundation for digital computing. The 32bit and 64bit labels used (carelessly) in common tech parlance today describe the memory that is addressable by CPUs (central processing units) and, just as often, describe attributes of software written to take advantage of the addressable memory made available by the CPU architecture. For example, we have 64bit CPUs from Intel and 64bit versions of Solaris 10 and RedHat Linux.
Here comes the math part. The “bit” is the limit of the computer’s ability to address and store information. A machine based on 32bit architecture can represent 2^32 distinct values, which means the largest number it can hold in a single 32bit word is 2^32 − 1, or 4,294,967,295.
A quick glance suggests that this means we have a 4GB limit for files used in 32bit architectures. This is partially true. However, a “number” in this context is really an integer. Integers are (we all remember from 6th grade math, right?) positive or negative whole numbers, including zero. Thus, even though 32bit systems give us 2^32, or 4,294,967,296, distinct values to play with, we need to include the representation of negative integers when doing computations. The pool of integers we get to use is −2,147,483,648 to +2,147,483,647. This is the root of the 2GB boundary limit.
The difference between the set of integers from zero to 4,294,967,295 and the set from −2,147,483,648 to +2,147,483,647 is the difference between unsigned and signed integers. Software developers use different software languages and libraries of tools and methods, built by others, to help create the foundation for features and functions in the software we use every day. Inherent in these tools are assumptions about how computations will be made. Consequently, some of the software applications still in use today have built-in limitations on how they can read, write, and address files where the numbers (integers) used to represent the sizes are larger than the limits of 32bit math.
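A quick way to see this boundary in action is to force a just-over-2GB file size into a signed 32bit integer. The sketch below uses Python’s ctypes module to mimic 32bit signed arithmetic; the variable names are ours, chosen for illustration:

```python
import ctypes

just_under = 2**31 - 1   # 2,147,483,647 -- the largest signed 32bit value
just_over = 2**31        # 2,147,483,648 -- one byte past the 2GB boundary

# The last value that fits in a signed 32bit field
print(ctypes.c_int32(just_under).value)  # 2147483647

# One more, and the value wraps around to negative
print(ctypes.c_int32(just_over).value)   # -2147483648
```

Software that stores a file size in a signed 32bit field sees that negative number and, quite reasonably, refuses to proceed. That is the 2GB boundary in a nutshell.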
Think about this for a second. How long ago was it that 250MB hard drives were the biggest you could buy to put in a laptop? It wasn’t long ago that 2GB files might have been considered “way-out-there”, “over-the-horizon” type file sizes, and therefore occasions for developers to consider this limit were few and far between.
Practical manifestations of this calculation limit still show up for end-users every day.
1. They can’t copy or save files over 2GB.
2. They may have a 64bit (integer pool is now 2^64) CPU but their software has been built and compiled on 32bit systems, so they bump into 2GB boundaries.
3. They work in environments where some systems can handle large files and others do not, so cross-system transfers fail without an obvious reason.
All very real, and all very frustrating. As an experiment, try talking to a customer who has paid $500 for a piece of software they need right now, and explaining that their browser/OS/file-system isn’t compatible with large files. Now try it when they just got done watching a streaming Iron Man movie on the same machine. Not fun!
There are ways to work-around this limit (see Long vs Float), some obvious, and others not-so-obvious. Large files are still being downloaded, copied and transferred in great quantities. Specialized tools and architectures are making sure that you can distribute your digital assets, however large they may be, quickly and with high success rates. But we are still seeing this as one of the reasons downloads fail to complete successfully.
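As a rough sketch of the “Long vs Float” trade-off mentioned above: a 64bit integer (a “long”) holds multi-gigabyte sizes exactly, while a floating-point number holds them only approximately once sizes pass 2^53. Again using Python’s ctypes purely for illustration:

```python
import ctypes

three_gb = 3 * 2**30  # a 3GB file -- past the signed 32bit limit

# In a signed 32bit field the size wraps negative...
print(ctypes.c_int32(three_gb).value)  # -1073741824

# ...but a signed 64bit field ("long") holds it exactly
print(ctypes.c_int64(three_gb).value)  # 3221225472

# Floats hold big sizes too, but lose exactness past 2**53:
# two different sizes collapse into the same float
print(float(2**53) == float(2**53 + 1))  # True
```

This is why the not-so-obvious workarounds favor 64bit integers over floats: a float will let you represent a huge file size, but not always the exact one.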
Sometimes downloads fail because users stop them. Sometimes downloads fail because of browser bugs. Sometimes downloads fail because the math (signed integers) was never going to let them succeed.
(acknowledgement: this is a much-discussed topic in tech circles, and we’ve read everything we can find on the subject. My thanks go out to everyone who has attempted to explain this issue, in any fashion, to a non-technical audience. It’s not easy. Hopefully this treatment beneficially adds to the ongoing discourse.)
This just hit my RSS feed. Good timing given our earlier post on P2P.
Sandvine, best known for manufacturing the hardware that slowed down BitTorrent users on Comcast, has released an Internet traffic trends report today. The report shows that, on average, P2P traffic is responsible for more than half of the upstream traffic, but mostly the report seems an attempt to sell their traffic shaping products.
Over the years, many Internet traffic reports have been published. Back in 2004, long before the BitTorrent boom had started, studies already indicated that BitTorrent was responsible for an impressive 35% of all Internet traffic.
Since then, we’ve seen a couple of dozen reports, all with a totally different outcome. Some estimate that P2P traffic represents approximately 50% of the total traffic, while others go as high as 85%, or as low as 20%. The overall consensus seems to be that there is little consensus, or is there?
We think we might have spotted a trend, not so much in the data, but in the companies that publish these reports. Most Internet traffic research is conducted by companies that offer traffic shaping and broadband management solutions. Cachelogic, Ipoque, Sandvine, they all sell (or sold) products that help ISPs to manage their traffic.
Consequently, it is not a big surprise that their presentation of the results is often a little biased. After all, it is in their best interests to overestimate the devastating effects P2P traffic has, and convince ISPs that they need to throttle these awful bandwidth hogs.
Or as Sandvine co-founder Dave Caputo puts it: ‘Bulk bandwidth applications like P2P are on all day, everyday and are unaffected by changes to network utilization. This reinforces the importance of protecting real-time applications that are sensitive to jitter and latency during times of peak usage.’
In Sandvine’s report we see that P2P represents less than a quarter of all downstream traffic, and even less during peak times. Web traffic is most dominant and online media streaming sites take up nearly 16%.
On the upstream side, P2P traffic takes up 61% of all traffic (the black makes it even more scary), followed by web-browsing, tunneling and VoIP traffic.
Interestingly, the amount of bandwidth that is transferred on the Internet has more than quadrupled since the first reports came out a few years ago, and it is likely to quadruple again in only a few years. Contrary to what Sandvine suggests, throttling is not the solution. Investing in the network is.
Post from: TorrentFreak
I’m hard-pressed to find a software tool that I spend more time in than a web-browser. Given that at OMS we deploy our entitlement, software license management, and distribution tools in a SaaS environment, this makes sense. But even if we didn’t use web-browsers as the means to interact with our application logic, I’d guess that more than 50% of my time on my machine is spent working inside a browser.
For the vast majority of my daily transactions, browsers function outstandingly. However, when downloading large files (large is defined as 2.0GB or greater – for today – as the definition of “large file” changes fast in this business) web-browsers are imperfect, and sometimes limiting.
For example, Windows Internet Explorer has a 2GB download limit, the details of which can be found in the MS Knowledgebase article. In our experience, few users are aware of this limitation, or the work-around which can increase the downloadable image size to 4GB. Firefox does not have this specific limitation, but is also not the “perfect” download tool for large files.
Limits in approach and technology, such as the IE 2GB limit, cause downloads to fail without a readily apparent reason (from the perspective of the end-user) and, unfortunately, only after the user has already spent lots of time waiting for the download to complete. This maddens and frustrates both users and the companies who pay to distribute high-value digital assets electronically.
In response to these imperfections, the software community has authored alternatives. You can easily find both commercial and free 3rd party download tools to augment or replace the on-board download manager functionality in modern web-browsers. These tools provide end-users with many more options for managing downloads, and are able to avoid or outright replace the native browser technology used in the download function. Our friends at Sun (see the blogroll on the right of this page) and Akamai have gone so far as to develop and deploy their own proprietary download manager solutions to help their end-users complete downloads on the first attempt.
If you are routinely downloading large files you should be using one of these many add-ons or applications (if you aren’t already). We’ll add a list of some of the download managers we use to this post in the next few days.
Next Installment: “Why Downloads Fail Part 3: The Math”
Interesting article from Ars on P2P use.
OMS has been following the state of P2P and P4P technology for quite some time. There are challenges to its use in enterprise environments, but if this report and its forecast are correct, some of the barriers to broader adoption may soon disappear.
As video distribution companies take to the Internet and try to keep costs down, they are discovering P2P’s legal uses. However it’s delivered, though, online video could force major ISPs to rethink pay-TV business models.