OMS SafeHarbor

Connecting Software and Customers

Archive for October 2008

Peering War Breaks Out!

leave a comment »

FYI for those readers who may have customers or partners on Cogent and Sprint networks.

Peering War Breaks Out!: “A border war has broken out between tier-1 providers Cogent Communications (CCOI) and Sprint Nextel (S); they are no longer exchanging traffic. That means that single-homed clients of Cogent cannot access data on Sprint’s network, and vice versa. The internet is, for the time being, partitioned, as one can see here. Cogent issued its […]

Written by admin

October 31, 2008 at 2:09 pm

Posted in OMS News

Why Downloads Fail – Part 3 – 32bit Math and the 2GB Boundary Limit

leave a comment »

In parts 1 and 2 of our “Why Downloads Fail” series we discussed end-user behavior and browser limitations as reasons why downloads fail. Today we’re going to try to explain one of the most exasperating reasons why large downloads fail: the math. Specifically, the limits of the math used in download computation.

The math used in computation and file storage can limit the size of files that can be downloaded successfully. It’s as true as it is painful for users and service providers alike.

Modern computers use 32bit or 64bit computing architectures. The word “bit” refers to the “binary digit”, the 0s and 1s used as the foundation of digital computing. The 32bit and 64bit labels used (often carelessly) in common tech parlance describe the amount of memory addressable by CPUs (central processing units) and, just as often, describe attributes of software written to take advantage of the addressable memory made available by the CPU architecture. For example, we have 64bit CPUs from Intel and 64bit versions of Solaris 10 and Red Hat Linux.
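
(A quick aside for the technically inclined: one easy way to check whether a particular program was built as 32bit or 64bit is to have it report the width of its own pointers. The little C sketch below is just an illustration we put together for this post, not part of any product.)

    #include <stdio.h>

    int main(void) {
        /* On a 32bit build a pointer occupies 4 bytes; on a 64bit build, 8 bytes.
           Note that a plain "int" usually stays at 4 bytes even on 64bit systems. */
        printf("pointer width: %zu bytes (%zu-bit build)\n",
               sizeof(void *), sizeof(void *) * 8u);
        printf("int width:     %zu bytes\n", sizeof(int));
        return 0;
    }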

Here comes the math part. The number of bits is the limit of the computer’s ability to address and store information. A 32bit value can represent only 2^32 distinct numbers, so the largest (unsigned) value it can hold is 4,294,967,295.

A quick glance suggests that this means we get a 4GB limit for files used on 32bit architectures. This is partially true. However, a “number” in this context is really an integer. An integer is (we all remember from 6th grade math, right?) a positive or negative whole number, including zero. Thus, even though 32bit systems give us 2^32, or 4,294,967,296, distinct values to play with, we need to include the representation of negative integers when doing computations. The pool of integers we actually get to use is −2,147,483,648 to +2,147,483,647. This is the root of the 2GB boundary limit.
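
To make that boundary concrete, here is a small illustrative C sketch (the 2.5GB file size is made up) showing what happens on a typical machine when a byte count just past 2GB is squeezed into a signed 32bit integer:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        int64_t actual_size = 2500000000LL;          /* a hypothetical 2.5GB file, in bytes */
        int32_t stored_size = (int32_t)actual_size;  /* what a signed 32bit field can hold */

        printf("actual size:            %" PRId64 " bytes\n", actual_size);
        printf("stored in signed 32bit: %" PRId32 " bytes\n", stored_size);
        /* On typical two's-complement systems the second line prints a negative
           number, because 2,500,000,000 is past the +2,147,483,647 ceiling. */
        return 0;
    }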

The difference between the two ranges (zero to 4,294,967,295 versus −2,147,483,648 to +2,147,483,647) is the difference between unsigned and signed integers. Software developers rely on programming languages and on libraries of tools and methods, built by others, to create the foundation for the features and functions in the software we use every day. Inherent in these tools are assumptions about how computations will be made. Consequently, some software applications still in use today have built-in limitations on how they can read, write, and address files whose sizes, expressed as integers, are larger than the limits of signed 32bit math.
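
Those two ranges aren’t magic numbers we picked; they fall straight out of the standard 32bit integer limits, as this minimal C sketch shows:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* The same 32 bits, interpreted two different ways. */
        printf("unsigned 32bit range: 0 to %" PRIu32 "\n", (uint32_t)UINT32_MAX);
        printf("signed 32bit range:   %" PRId32 " to %" PRId32 "\n",
               (int32_t)INT32_MIN, (int32_t)INT32_MAX);
        /* A file size kept as a signed 32bit byte count therefore tops out at
           2,147,483,647 bytes, which is the 2GB boundary this post is about. */
        return 0;
    }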

Think about this for a second. How long ago was it that 250MB hard drives were the biggest you could buy to put in a laptop? It wasn’t long ago that 2GB files might have been considered “way-out-there”, “over-the-horizon” file sizes, and so occasions for developers to consider this limit were few and far between.

Practical manifestations of this calculation limit still show up for end-users every day.
1. They can’t copy or save files over 2GB.
2. They may have a 64bit CPU (the integer pool is now 2^64) but their software has been built and compiled on 32bit systems, so they bump into 2GB boundaries.
3. They work in environments where some systems can handle large files and others cannot, so cross-system transfers fail without an obvious reason.
All very real, and all very frustrating. As an experiment, try talking to a customer who has paid $500 for a piece of software they need right now, and explaining that their browser/OS/file-system isn’t compatible with large files. Now try it when they have just finished watching a streaming Iron Man movie on the same machine. Not fun!

There are ways to work around this limit (see Long vs Float), some obvious, others not so obvious. Large files are still being downloaded, copied and transferred in great quantities. Specialized tools and architectures are making sure that you can distribute your digital assets, however large they may be, quickly and with high success rates. But we still see this as one of the reasons downloads fail to complete successfully.
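
One of the more obvious work-arounds on the systems side is simply to do all byte counting with 64bit quantities (see the Large file support link below). As a rough, POSIX-flavored sketch, and assuming a platform that provides fseeko/ftello (on 32bit Linux builds this usually also means compiling with -D_FILE_OFFSET_BITS=64), measuring a file without ever squeezing its size into a 32bit integer looks something like this:

    /* Build hint for 32bit Linux/glibc: cc -D_FILE_OFFSET_BITS=64 filesize.c */
    #include <stdio.h>
    #include <sys/types.h>   /* off_t */

    /* Returns the file size in bytes, or -1 on error. With a 64bit off_t this
       keeps working well past the 2GB boundary. */
    static long long file_size_bytes(const char *path) {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;
        if (fseeko(f, 0, SEEK_END) != 0) {   /* fseeko takes an off_t offset */
            fclose(f);
            return -1;
        }
        off_t size = ftello(f);              /* ftello returns an off_t, not a 32bit long */
        fclose(f);
        return (long long)size;
    }

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        printf("%s: %lld bytes\n", argv[1], file_size_bytes(argv[1]));
        return 0;
    }

The same idea carries over to other languages: the fix is almost always “use a 64bit integer type for byte counts,” nothing more exotic than that.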

Sometimes downloads fail because users stop them. Sometimes downloads fail because of browser bugs. Sometimes downloads fail because the math (signed integers) was never going to let them succeed.

Useful links

http://en.wikipedia.org/wiki/Integral_data_type

http://en.wikipedia.org/wiki/Signed_number_representations

http://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits

http://en.wikipedia.org/wiki/Large_file_support

(acknowledgement: this is a much-discussed topic in tech circles, and we’ve read everything we can find on the subject. My thanks go out to everyone who has attempted to explain this issue, in any fashion, to a non-technical audience. It’s not easy. Hopefully this treatment adds usefully to the ongoing discourse.)

Written by admin

October 28, 2008 at 6:35 pm

Posted in Downloads

‘Shocking’ 61% of all Upstream Internet Traffic is P2P

with 2 comments

This just hit my RSS Feed. Good timing given our earlier post on P2P

‘Shocking’ 61% of all Upstream Internet Traffic is P2P:

Sandvine, best known for manufacturing the hardware that slowed down BitTorrent users on Comcast, has released an Internet traffic trends report today. The report shows that, on average, P2P traffic is responsible for more than half of the upstream traffic, but mostly the report seems an attempt to sell their traffic shaping products.

Over the years, many Internet traffic reports have been published. Back in 2004, long before the BitTorrent boom had started, studies already indicated that BitTorrent was responsible for an impressive 35% of all Internet traffic.

Since then, we’ve seen a couple of dozen reports, all with a totally different outcome. Some estimate that P2P traffic represents approximately 50% of the total traffic, while others go as high as 85%, or as low as 20%. The overall consensus seems to be that there is little consensus, or is there?

We think we might have spotted a trend, not so much in the data, but in the companies that publish these reports. Most Internet traffic research is conducted by companies that offer traffic shaping and broadband management solutions. Cachelogic, Ipoque, Sandvine, they all sell (or sold) products that help ISPs to manage their traffic.

Consequently, it is not a big surprise that their presentation of the results is often a little biased. After all, it is in their best interests to overestimate the devastating effects P2P traffic has, and convince ISPs that they need to throttle these awful bandwidth hogs.

Or as Sandvine co-founder Dave Caputo puts it: ‘Bulk bandwidth applications like P2P are on all day, everyday and are unaffected by changes to network utilization. This reinforces the importance of protecting real-time applications that are sensitive to jitter and latency during times of peak usage.’

In Sandvine’s report we see that P2P represents less than a quarter of all downstream traffic, and even less during peak times. Web traffic is most dominant and online media streaming sites take up nearly 16%.

[chart: downstream traffic]

On the upstream side, P2P traffic takes up 61% of all traffic (the black makes it even more scary), followed by web-browsing, tunneling and VoIP traffic.

[chart: upstream traffic]

Interestingly, the amount of bandwidth transferred on the Internet has more than quadrupled since the first reports came out a few years ago, and it is likely to quadruple again in only a few years. Contrary to what Sandvine suggests, throttling is not the solution. Investing in the network is.

Post from: TorrentFreak

(Via Clippings.)

Written by admin

October 23, 2008 at 2:01 pm

Posted in Downloads

Why Downloads Fail – Part 2: Browsers are Imperfect

with 2 comments

I’m hard-pressed to find a software tool that I spend more time in than a web-browser. Given that at OMS we deploy our entitlement, software license management, and distribution tools in a SaaS environment, this makes sense. But even if we didn’t use web-browsers as the means to interact with our application logic, I’d guess that more than 50% of my time on my machine is spent working inside a browser.

For the vast majority of my daily transactions, browsers function outstandingly well. However, when downloading large files (large being defined as 2.0GB or greater, for today, since the definition of “large file” changes fast in this business), web-browsers are imperfect and sometimes limiting.

For example, Windows Internet Explorer has a 2GB download limit, the details of which can be found in the relevant MS Knowledgebase article. In our experience, few users are aware of this limitation, or of the work-around that can increase the maximum downloadable image size to 4GB. Firefox does not have this specific limitation, but it is also not the “perfect” download tool for large files.

Limits in approach and technology, such as the IE 2GB limit, cause downloads to fail without a readily apparent reason (from the end-user’s perspective) and, unfortunately, only after the user has already spent a lot of time waiting for the download to complete. This maddens and frustrates both users and the companies that pay to distribute high-value digital assets electronically.

In acknowledgement of, and response to, these imperfections, the software community has authored alternatives. You can easily find both commercial and free 3rd-party download tools that augment or replace the on-board download manager functionality in modern web-browsers. These tools give end-users many more options for managing downloads, and can avoid or outright replace the native browser technology used for downloads. Our friends at Sun (see the blogroll on the right of this page) and Akamai have gone so far as to develop and deploy their own proprietary download manager solutions to help their end-users complete downloads on the first attempt.
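
To give a flavor of what these tools do differently under the hood, here is a hedged sketch using libcurl (the library behind many command-line and embedded download clients) to resume a partially downloaded file where it left off. The URL and file names are invented for the example; note that libcurl’s CURLOPT_RESUME_FROM_LARGE option takes a 64bit offset precisely so that resuming past the 2GB mark works.

    /* Sketch only: resume a partial download with libcurl. Typical build: cc resume.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        const char *url  = "http://downloads.example.com/big-installer.bin"; /* hypothetical */
        const char *path = "big-installer.bin.part";                         /* hypothetical */

        /* Open the partial file for appending and see how many bytes we already have. */
        FILE *out = fopen(path, "ab");
        if (out == NULL)
            return 1;
        fseek(out, 0, SEEK_END);
        curl_off_t already_have = (curl_off_t)ftell(out);  /* a real tool would use a 64bit-safe ftello here */

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl == NULL) {
            fclose(out);
            return 1;
        }

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);    /* default write callback fwrite()s into this FILE* */
        /* Ask the server to start sending at the byte where we left off.
           The _LARGE variant takes a 64bit curl_off_t, so offsets past 2GB work. */
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, already_have);

        CURLcode rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            fprintf(stderr, "download failed: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        fclose(out);
        return (rc == CURLE_OK) ? 0 : 1;
    }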

If you routinely download large files, you should be looking to use one of these many add-ons or applications (if you aren’t already). We’ll add a list of some of the download managers we use to this post in the next few days.

Next Installment: “Why Downloads Fail Part 3: The Math”

Written by admin

October 23, 2008 at 1:43 pm

Forecast: Legal P2P uses growing 10x faster than illegal ones

leave a comment »

Interesting article from Ars on P2P use.
OMS has been following the state of P2P and P4P technology for quite some time. There are challenges to its use in enterprise environments, but if this report and its forecast are correct, some of the barriers to broader adoption may soon disappear.

Forecast: Legal P2P uses growing 10x faster than illegal ones:

As video distribution companies take to the Internet and try to keep costs down, they are discovering P2P’s legal uses. However it’s delivered, though, online video could force major ISPs to rethink pay-TV business models.

Read More…

Written by admin

October 23, 2008 at 12:18 pm

Posted in Network Economics, Software ESD

Why Downloads Fail – Part 1

with 2 comments

Downloads fail.
(repeat for emphasis)
Downloads fail.

In recent weeks we’ve had a number of conversations around this topic with customers, partners, technology suppliers and interested individuals. As software distributors begin to move completely away from physical distribution, there is increased interest in the reliability of the internet as a distribution medium.

Over the next few weeks we will try to document the many different reasons why downloads fail, and what, if anything, can be done to help mitigate the risks of an interrupted download. Hopefully we’ll be able to compile a useful list that you can use as you address the challenge of improving download completion rates, or when you need to explain the situation to others in your business. There will be no particular order or rank to this list for now. When we complete the list, we’ll look through the available data and try to rank the reasons by rate of occurrence.

Why Downloads Fail #1 – Users Stop the Download

Yes, it’s true. Users will sometimes stop a download before it completes.

Sometimes the download takes too long and we stop it. Other times we may be in a hurry, and close the laptop and leave the office. We’ve all done it at one time or another.

The problem for service providers, including OMS SafeHarbor, is that these incomplete downloads are largely indistinguishable from other ways downloads fail. If the user closes their laptop and kills the network connection on purpose, we just see the disconnect. We don’t get a signal that says the user intended to disconnect.

Isolating user behavior from legitimate errors is a time-consuming, costly, and imperfect process that we go through to maintain our quality-of-service advantages and reporting relevance.

Sometimes the user decides they want to stop the download of the software update, or new release, for reasons we can only guess. We need to make sure to include this end-user behavior in all the discussions of why downloads fail.

Written by admin

October 21, 2008 at 3:09 am

RSS use peaking? I’m not so sure, but others disagree.

leave a comment »

Here is another post from Steve Rubel on RSS feeds. Given that we asked last week which RSS readers you use, I thought it was appropriate.

RSS Adoption at 11% and it May Be Peaking, Forrester Says:

Forrester Research today published a new report on the state of RSS. In short, while there are bright spots, it does not paint the picture of a technology that’s going mainstream anytime soon.

On a positive note, the research, entitled What’s Holding RSS Back?, says that nearly half of marketers have moved to add feeds to their web sites. Further, RSS adoption among consumers is at 11%, up from just 2% of users three years ago. RSS feed usage is more common among men.

Here’s the kicker, though. That might be all she wrote for RSS’ growth track.

According to the research, of the 89% of those who don’t use feeds only 17% say they’re interested in using them. In fact Forrester spends much of the report helping marketers better explain the benefits of RSS to their customers. ‘Unless marketers make a move to hook them — and try to convert their apathetic counterparts — RSS will never be more than a niche technology,’ the analysts (who include Jeremiah Owyang) wrote.

Lord knows, as someone who spends three hours a day in Google Reader, I am a giant evangelist for RSS. But I am also a realist. Feeds are way way too geeky for most and the benefit does not outweigh the learning curve. So I think RSS has peaked.

Still, while feed adoption may have crested, the idea of online opt-in communications is just getting going. The Facebook newsfeed, Twitter and FriendFeed are perfect examples of opt-in vehicles that bring content you care about to you. In each case, you’re totally in control. You can unsubscribe from individuals or groups and tailor the stream so that what you want finds you.

RSS is only one form of opt-in communications. The potential is bigger when you look more broadly at social networking. This larger promise still holds, and as the technologies become more invisible, the newsfeed could one day even subsume RSS.

Written by admin

October 21, 2008 at 2:58 am

Posted in Miscellaneous
