Will Windows Azure succeed?

At PDC 2008, Ray Ozzie went out on a limb by saying that Windows Azure would be "setting the stage for the next 50 years of systems". Everyone (me included) was excited about this new technology, and people were inspired by Microsoft's vision and its new cloud computing platform.

16 months later, here we are: Windows Azure is live, the platform has been consolidated (R.I.P., Live Framework…) and data centers have been built around the world.

But is Windows Azure the game changer that Microsoft promised and is betting on? Will Windows Azure succeed as a product? I'm not sure, but let's see…

First things first

My thoughts on this topic have a certain background. In 2009 my company SDX invested significant research time into the innovative areas of cloud computing in general and Windows Azure in particular, and we're still moving forward in this area. As part of the new CloudApp() contest we built a little showcase named Rating Stress Simulator, which you can try out on Windows Azure right now.

On the architectural side we tried to use many of the possibilities offered by Windows Azure: WebRole, WorkerRole, message queue, table storage, WCF service hosted in the WebRole, …
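To give an idea of how the WebRole and the WorkerRole communicate, here is a minimal sketch of the queue part, using the StorageClient library from the Windows Azure SDK of that time. This is not our actual showcase code; the connection string name and the "simulations" queue name are made up for the example:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

// WebRole side: enqueue a simulation request for the WorkerRole.
var account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
var queue = account.CreateCloudQueueClient().GetQueueReference("simulations");
queue.CreateIfNotExist();
queue.AddMessage(new CloudQueueMessage("scenario-id-42"));

// WorkerRole side (inside the Run() loop): poll, process, delete.
var message = queue.GetMessage();
if (message != null)
{
    // ... run the simulation for message.AsString and write the
    //     results to table storage ...
    queue.DeleteMessage(message);
}
```

The queue decouples the web front end from the computation, which is exactly what lets the WorkerRole instances scale independently of the WebRole.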

We gathered some experience with Windows Azure, both on the technical and on the business side. We find the platform very promising and believe it is, from a technical point of view, the best cloud platform on the market. It gives developers great flexibility while letting them use their existing skills with .NET, Visual Studio, SQL Server and other familiar technologies.

Costs always matter!

While we see a great platform, we're also unsure whether we should bet on Windows Azure. The main reason for this is the fixed cost of idle hosting: the cost of just keeping our application online and running without a single user on it. For this, our simple application with two roles, a message queue and table storage (no SQL database included!) incurs monthly costs of about 130€ (~$177)! Most of that comes from the two running roles, and I can tell you: we aren't happy with the current situation.

And we're not alone with our criticism. Windows Azure costs are hotly debated these days, and making it cheaper to host small applications on Azure is the no. 1 request of developers who voted on the My Great Windows Azure Idea website. Several blog posts and discussions point in the same direction. When people realized that Azure compute costs are based on wall clock time and not on actual CPU time, they also realized that hosting on Windows Azure is ridiculously expensive compared to other options on the market.

Let's do another calculation of idle costs. Imagine a little start-up application with 2 roles (small instances), 1 Web edition database and 1 AppFabric Service Bus connection, up and running all the time and waiting to be used. This scenario leads to monthly costs of about 137€ (~$193), which adds up to about 1610€ (~$2270) for a year! These idle costs are fixed costs: the entrance fee for just keeping the application online without any traffic. Isn't one of the basic ideas of cloud computing to keep fixed costs low and transform them into variable costs? Windows Azure doesn't follow this idea, at least not on a reasonable scale… Hence it isn't attractive for start-ups and small companies, which could buy and run a server of their own for that money and get full flexibility.
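For transparency, here is how such a bill comes together, assuming the list prices at the time of writing ($0.12 per small instance hour, $9.99 per Web edition database per month, $3.99 per Service Bus connection per month) and a 31-day month:

    2 small instances, 24×7:        2 × 744 h × $0.12/h ≈ $178.56
    1 SQL Azure Web edition DB:                            $9.99
    1 AppFabric Service Bus conn.:                         $3.99
    Total:                                               ≈ $192.54 per month

Storage and bandwidth come on top, but they are negligible for an idle application.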

Competitors

But what about the offerings of other companies, and how attractive are they? I'll start with shared hosting as one way to outsource infrastructure and application hosting. Shared hosting is many times cheaper than Windows Azure. Windows Azure certainly offers additional value: deployment, management, scalability. But the point is that for most people who are interested in Azure, those values don't matter much. For most people, companies and their applications, these qualities come nowhere near justifying the higher costs of Azure hosting compared to shared hosting.

But let's turn to two 'real' cloud providers. Google's AppEngine offers a certain amount of free quota, and its way of measuring compute costs gets it right: Google charges only for the CPU time actually consumed (in contrast to the wall-clock-time-based model of Windows Azure), so you are not charged for your processes' idle time.

Amazon's EC2 bills on a wall clock basis, but there you have the full flexibility of equipping your VMs with everything you want.

And then there's Microsoft with Windows Azure, which combines the drawbacks of both models rather than their benefits. On the one hand, Windows Azure charges on a wall clock basis, so you pay for your application's idle time as well. On the other hand, Windows Azure is VM-based: every "role" represents an application fragment that maps 1:1 to a VM. The serious drawback here is that you cannot host more than one role in a single VM instance, so you don't get any of the flexibility of Amazon's EC2 VMs.
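To make the difference between the two billing models concrete, consider one small instance whose CPU is actually busy only 5% of the time. The rate and the utilization figure are my own assumptions for illustration, not official numbers:

```csharp
// Back-of-the-envelope comparison of the two billing models for one
// small instance that actually computes only 5% of the time.
const double HoursPerMonth = 744;   // wall clock hours in a 31-day month
const double Utilization   = 0.05;  // fraction of time the CPU is really busy
const double RatePerHour   = 0.12;  // assumed rate in $ per hour

double wallClockBill = HoursPerMonth * RatePerHour;               // Azure-style:     ~$89.28
double cpuTimeBill   = HoursPerMonth * Utilization * RatePerHour; // AppEngine-style:  ~$4.46
```

Under CPU-time billing the mostly idle app costs next to nothing; under wall clock billing you pay for the full month regardless.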

Microsoft should revise this role/VM-based scaling model and/or the wall clock billing. With the current model the platform is not attractive for hobby developers, start-ups and smaller companies. The entrance barrier is way too high and the scalability is too limited. Why should any small company choose Windows Azure over AppEngine or EC2? I don't know…

But what are the implications of not attracting hobby developers, start-ups and small companies? Perhaps we can learn from the past, so let's take a look at history…

A brief look at history

Some days ago an older colleague of mine told me about the rise of Windows and the failure of OS/2 Warp (this was before my active computing days, so it was interesting to hear this story from ancient times…). He told me that OS/2 had striking technical features compared with Windows, but it couldn't win the OS war. It was beaten by Windows and died a slow and painful death…

One reason he sees for this is OS/2's poor appeal to hobby developers and IBM's inability to build a big developer community around the platform. Windows came with a cheap compiler which enabled everyone to write their own Windows software. Microsoft was able to attract hobbyists, which resulted in a huge pool of shareware, freeware and the like, and yielded a large developer community and knowledge base for building Windows applications. OS/2 Warp failed here: the IBM compiler was expensive (> 1000 Deutschmark, about $650 in 1990), and even though the platform and its technical features were great, hardly anybody was attracted to develop for it due to the high entrance barrier of development costs. OS/2 never reached a critical mass of developers and development knowledge, and consequently never produced enough software for its users.

Hobby developers matter!

I can't help seeing parallels between OS/2 and Windows Azure on the cost side. Microsoft should learn from history and avoid the mistakes that others have made before. It's crucial to attract a broad range of developers who use Windows Azure to express their ideas and build the next wave of killer applications. With the current costs, only very few developers will be tempted to gain experience with the platform, and that's a big mistake!

By bringing Windows Azure to hobby developers and smaller companies, Microsoft would open heavy doors into the real business. Developers and architects would carry their experience with the platform from their personal lives into their jobs and would promote Windows Azure when asked for advice or when deciding on the platform for a new application. Accordingly, the entrance barrier even for bigger companies would be way lower: developers who have gained experience with the platform would spread their knowledge to colleagues. This would set off a chain reaction, and an exponentially growing number of developers would use the platform. It would be the best promotion for Windows Azure that Microsoft could get.

As things stand today, because of the costs, not many developers can recommend Windows Azure with a clear conscience, and that's a pity. Please, Microsoft: community matters! Realize the potential of your platform and the implications that come with the number of hobby developers using Windows Azure.

The need for a "Mini Azure" offer

So here is a suggestion: create a "Mini Azure" package as a new offer on the Azure platform. Low costs (<$10 per month), few resources, no dedicated machines, weak SLAs, but online 24×7; just enough to get people started. Remove this high entrance barrier, or you will go down the OS/2 road. The greatest platform leads nowhere and will not succeed if nobody uses it…

Mini Azure has been suggested by Jouni Heikniemi before, so take a look at what he and others say.

Conclusion

To summarize: if nothing changes, I'm afraid Microsoft will fail with Windows Azure; at the least, I fear it will be a non-starter. And that's a bad basis for future development. At the moment Microsoft is scaring people away, and it will have trouble winning them back. I'll say it again: I think they have a very good development platform. But the best platform leads nowhere if it's too expensive for people to use.


new CloudApp(): Rating Stress Simulator

Recently, Microsoft came up with the new CloudApp() Contest. This competition encourages developers all over the world to create applications on Microsoft's Azure platform that make use of the cloud and emphasize the benefits of cloud computing. While the U.S. winners have already been chosen, the international contest is open for community voting from July 10th to July 20th.

For this competition, and as part of our overall technology partner strategy, my company SDX AG put together a team to develop an Azure-based business application showcase. The team came up with a business scenario and used Windows Azure and Silverlight to bring it to the cloud. I had the chance to take part in the brainstorming and joined the team for some development tasks during the last few days…

The business story

Since my company has core competencies in the financial sector, the business scenario targets this area as well. The application implements a rating stress simulator for banks. What is this about? Here's the story: financial institutions cannot grant credits freely. Each credit must be backed with a certain amount of capital, depending on a calculated risk (the rating of the borrower).

Functionality

Our Azure application is a simple showcase based on this business scenario. It allows users to run a financial stress test which calculates the capital buffers needed to withstand selected situations (such as a recession) over some period of time. After the stress test has run, the user can inspect the results visually. The included algorithms are very simple, but they validate the architectural model of our cloud application and bring it to life.
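The post doesn't spell out the algorithms, so purely as an illustration of the kind of calculation involved (the names and the formula below are made up, not our showcase's real logic): a toy stress test could scale each credit's probability of default by a scenario-dependent stress factor and sum up the expected losses as the required buffer.

```csharp
using System;

static class ToyStressTest
{
    // Toy illustration only; not the showcase's real algorithm.
    // Stresses each credit's probability of default (PD) and sums
    // the expected losses to get a required capital buffer.
    public static double RequiredBuffer(double[] exposures, double[] pds, double stressFactor)
    {
        double buffer = 0.0;
        for (int i = 0; i < exposures.Length; i++)
        {
            double stressedPd = Math.Min(1.0, pds[i] * stressFactor);
            buffer += exposures[i] * stressedPd;
        }
        return buffer;
    }
}
```

In the real application, a computation of this kind runs in the WorkerRole, fed by scenario messages from the queue.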

Benefits

A bank that needs this functionality benefits substantially from running the application in the cloud. Such a stress test is not computed continuously but runs on a periodic cycle, e.g. weekly or monthly. So processing power and storage see a short peak where the server CPUs get busy, while the rest of the time they remain idle. By running the application in the cloud, a bank needs no additional servers in its data center to absorb the computational peaks of the stress test. This can clearly reduce infrastructure and administration costs.
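A rough illustration, with assumed numbers: if the monthly stress test keeps 20 small instances busy for 6 hours, the compute bill for the peak is about

    20 instances × 6 h × $0.12/h ≈ $14.40 per run

instead of the cost of owning and operating 20 servers that sit idle for the rest of the month.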

Try out and vote!

Of course, you can run the application yourself and try out its functionality. The Silverlight application starts at this URL: http://ratingsimulator.cloudapp.net. I encourage you to read the introductory information on the first page to prepare yourself and get more background. Then play around: create some scenarios with a number of credits, let the calculation run in the cloud, and afterwards view the stress test results in a summary diagram.

If you like our little application, we would be very glad if you voted for us. You can do that via the following link: [voting closed]. Thanks a lot for your support!

[Image: new CloudApp() - Rating Stress Simulator]

My colleague AJ has published a blog post about this as well: new CloudApp()


Thoughts on the Live Services

After meshing for some days now, I used my 8-mile running lap today to think about the Live stack as a whole (I love my runs; it's the only time I get to think in depth about some topics). So I want to step back from the bits and bytes for a moment and review the whole Mesh and Live Services thing. What does it bring for developers and for consumers? What experiences change for them? What is it all about, in my eyes? Questions to which I try to find answers in this blog post.

What is it about

Let's start with the general picture again. From a logical view, the Live Services are just one of the building blocks of the Windows Azure Services platform, as shown in the usual Windows Azure overview picture…

[Image: Windows Azure Overview]

Furthermore, the Live Services provide some core services on the one hand and a bunch of Mesh Services on the other, as shown in the Live Services overview picture…

[Image: Live Services Overview]

So we have some core services, we have the mesh and the services on it, and we have applications today which use those services: the Windows Live applications, Office Live, and Live Mesh as the consumer portion of the mesh, taking advantage of the Mesh Services. At the moment those applications are relatively separate from each other in terms of storage, APIs and information sharing. For example, Windows Live comes with the 25GB SkyDrive, Office Live has its own workspace, and Live Mesh currently gives you 5GB of storage. These are not interconnected. But big things are going on!

From the developer's view, the current Live Framework CTP already lets you take advantage of Windows Live functionality. You can access your Live contacts and profiles, and you can share data from your mesh with your contacts using various roles. In my opinion, Live Mesh will become part of Windows Live on http://windows.live.com, earning a central place there. In return, perhaps you will be able to access your calendar, mail and other services programmatically through the Live Framework? We'll see. Bringing it all together is desirable in my eyes.

Ok, let's continue. So what? What are the core concepts of the Live Services? For me, the whole functionality can be broken down into two main points:

Connecting

It's no secret what the Live Services, and especially the Mesh Services, are designed for. They bring together devices, applications, data and people, overcoming the traditional barriers between them. The container for this is the mesh. In fact there is more than one mesh. You have a mesh of your devices, which share common information. On those devices and in the cloud you can have data and applications which work with this data or on their own (leading to the terms data mesh and application mesh). In a coming version of Live Mesh you'll be able to find applications in an application catalogue and install them into your mesh. Integrating the Windows Live applications this way would be nice, and I'm curious to see whether this will happen soon. And with your contacts you have a mesh of people with whom you can share your information, your data and even applications.

Thus, connecting means connecting your devices, your data and your applications. But what it is really about, and what we can't fully see in Live Mesh today (ok, we can see a little of it, but that's only the tip of the iceberg), is the non-egocentric perspective. It's all about community! About sharing your photos, videos, thoughts and knowledge with your friends and like-minded people. The Live Services foundation is all about bringing people together and making "community" easier and more fun. In the end, it brings YOU, with your data, applications, devices and thoughts, together with the OTHERS in a way that doesn't limit you (in contrast to today's Web 2.0 platforms, which are bound to one or only a few aspects… or do you use one platform that lets you organize your whole digital life, in every little aspect, through rich applications you wrote yourself or that others wrote and shared?). For me, the Live Services in combination with Mesh are not just services. They are a platform for a huge set of new applications, new business models (putting your favorite ad engine into your apps is not far away!) and a new way to build up your digital life, using your mesh and the applications that fit your needs. Don't believe me? Neither did I, before I got my iPod Touch (Azure on my head 😉 ) and thousands of apps through the AppStore. Small, cute applications, mostly without a mesh, a community aspect or cross-device sharing behind them. But they are fun and they are cheap. Now I map this onto Silverlight, Mesh, the Live Services and the application catalogue, and I see huge potential!

Synchronizing

The second big functional aspect of Mesh and the services that come with it is synchronization. It can easily be underestimated, but it implies a lot. First, let me explain the current state. As a consumer, you currently have the Live Mesh application built on the Live Services stack. You can create folders on your Live Desktop in the cloud and share those folders across your devices. The data is automatically kept in sync across those devices by the Live Mesh client, which watches for changes and synchronizes the data in the background. That is what you see now.

What you will see in the future, once the application catalogue and applications in your mesh have come to life: not just your data, but also the applications in your mesh will be synchronized to your devices! So you not only have your data everywhere, but also the apps that work with that data, including configuration settings and so on. This enables offline scenarios for your mesh applications if they rely on data in your mesh (I don't know whether synchronizing your contacts' data is planned as well, and there are imho some issues with that… we'll see…). That means you can work offline with your applications, and when you come online again your data is instantly spread across your devices. I'm very curious whether there will be an offline-capable version of Windows Live Writer that lets you write your blog posts offline and then publishes/synchronizes them automatically when you come online. This would need deeper Windows Live integration, but as stated above, that is not far away. Ok, this offline capability is nice if you want to work somewhere without Internet access (for example on an airplane (the standard example throughout PDC and afterwards ^^), on the train, or, in my case, at my parents' place, because they have no Internet (well, they basically live in the stone age, eating with their fingers and drawing with chalk on stones… sorry, just joking 😉 )). But since Internet access is so widespread, this has less impact than the synchronization itself, with its anywhere/anytime access.

Instead, the speed of the Internet connection is an important point. To load big chunks of information from your mesh you need a high-speed connection, and even then you often have to wait a while before your data is ready to work with. This goes away when you work with your mesh data: the data is kept in sync with your computer, so you work on a local copy and access it instantly, without downloading it first. If you change the data, the Live Operating Environment (LOE) on your local device notices the changes and automatically runs the synchronization process against the other instances of the LOE (other devices and/or the cloud).

Automatic synchronization makes a big difference for developers, too. Instead of worrying about it, they can rely on this functionality. A developer can connect to the local LOE and work with the published resources without having to worry about network connections, data retrieval or synchronization; the Live Services deal with all of that. This makes a real difference! Keeping Azure in mind: developers can concentrate on the core functionality of their applications, saving time or spending it on fun and inspiration while realizing their ideas. Attracting the developer community is, in my mind, the cornerstone for the success of the Live Services.

What are the benefits

After making the core functionality of the Live Services clear, I want to emphasize (from my point of view) the benefits for consumers as well as for developers who program against the Live Framework.

Consumers

  • Removing barriers: The interface barriers between your devices, your data, your applications and the people/contacts around you are removed, so you can interact with all of them from a central point.
  • Synchronizing: Data and applications anytime and anywhere on your devices! That is what synchronization with your mesh is about. Instead of just synchronizing data, you can synchronize your mesh applications as well.
  • Offline scenarios: Because applications can be synchronized, you can work offline with applications (in your mesh or merely mesh-enabled) that rely on data in your mesh. As soon as you come online again, the changes are synchronized automatically to your mesh in the background.
  • Community: Interacting with people, sharing data and keeping in touch with your friends and contacts is made easy with the Live Services and Mesh. Access your contacts, manage your online profiles and share information, knowledge, multimedia content and general data as well as applications with your friends, granting them specific rights on your mesh objects. This also brings collaboration to life.
  • Applications: Access many small applications through your mesh and the central application catalogue. Create instances of those applications in your mesh, synchronize these apps to your machines' desktops or share them with friends.
  • Manage your digital life: With your data in the mesh and on all of your devices, and with applications working on your mesh data, you can manage your digital life from a central point. Install applications that fit your needs for every purpose, have them working on all of your devices, use services from Windows Live as well, and share your life with your friends and family.

Developers

  • Easy programming: Programming against the Live Framework is easy. You access your data through central collections and don't have to worry about common, challenging programming tasks. You can query your data resources consistently and use LINQ from the .NET libraries; CRUD operations on your data are no problem thanks to the consistent resource model (see the sketch after this list).
  • Consistent access: The Live Framework lets you access your mesh data, your profiles and contacts, and later on other Windows Live services and Live entities with their associated relationships, all in a consistent way.
  • Focusing: Many difficult tasks like network access, authentication, community connection and synchronization are abstracted away by the Live Services, with the Live Framework as the programming interface and the Live Operating Environment as the device endpoint of your mesh. You can focus your attention on the important things and on the consumer experience of your application.
  • Cloud/client symmetric programming model: It doesn't matter which instance of the Live Operating Environment you access. Whether it runs in the cloud or on somebody's device, you code against it in a consistent way, which lets you easily realize scenarios of your choice.
  • Business model: You'll be able to include your favorite ad engine in your apps and make money with your applications as they spread throughout the community.
  • Open access: You can access every instance of the Live Operating Environment (a single device or the cloud), and therefore your mesh, with the programming language and operating system of your choice through a RESTful HTTP interface that relies on open specifications. You can also choose your favorite wire format (POX, ATOM, RSS, JSON), because data in the Live Framework is entity/collection-based, has no associated behavior and can therefore easily be transformed into a feed-based model. Furthermore, libraries for many languages won't be far away…
  • Scripting: The Live Operating Environment can compile and run so-called resource scripts, which are similar to stored procedures in the database world. You can run a resource script yourself or let it run as a trigger on data manipulation operations.
  • Mesh-enabling: No matter whether you build a new application or extend an existing one, making use of mesh functionality (i.e. mesh-enabling an application) is no problem and can easily be done with the existing libraries.
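To give a small taste of the "easy programming" point above, here is a sketch in the spirit of the Live Framework CTP's .NET library. I'm writing it from memory, so treat the type and member names as approximate illustrations rather than the authoritative API:

```csharp
using System;
using System.Linq;
using System.Net;
using Microsoft.LiveFX.Client; // Live Framework CTP .NET library

class MeshQuerySample
{
    static void Main()
    {
        // Connect to the cloud LOE; member names follow the CTP as I
        // remember it and may not be exact.
        var loe = new LiveOperatingEnvironment();
        loe.Connect(new NetworkCredential("user@example.com", "password"));

        // Query the mesh objects with plain LINQ.
        var workObjects = from mo in loe.Mesh.MeshObjects.Entries
                          where mo.Resource.Title.StartsWith("Work")
                          select mo.Resource.Title;

        foreach (var title in workObjects)
            Console.WriteLine(title);
    }
}
```

The same resources are also reachable over the RESTful HTTP interface mentioned above, so this .NET view is just one of several equivalent ways in.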

CTP limitations

While the potential of the Live Services and the Live Framework is big in my eyes, the current Live Framework CTP has some limitations which will hopefully be resolved in coming releases. The first one came up when installing the Live Framework client, which is responsible for the background synchronization of your mesh objects: at first it was not possible to install the Live Framework client side by side with an existing Live Mesh client installation. I worked out a solution that works well for me, but I don't know whether it has issues. The fact that there is a separate mesh for the Live Framework instead of using the consumer mesh is a limitation as well, because you cannot access the same data as in your "productive" mesh.

Another limitation concerns the functionality of the Live Framework client. At the moment, data synchronization is not possible, which means that files and folders from your mesh are not synchronized to devices running the Live Framework client. The same is true for your profiles and contacts: you have to connect to the cloud LOE to access them. Further limitations are that you can't get status information about a device and that no remote access is possible (though that doesn't matter too much).

When programming against the Live Framework CTP you currently cannot traverse hierarchical data directly, so you have to implement that yourself. Some convenience functionality for accessing media resources is missing as well, but that's no big deal.

Overall, with the Live Services, Mesh, Windows Live and the Live Framework, the foundation stones are laid for a new platform of rich applications and experiences for both consumers and developers. We'll see how the community reacts and where the whole thing stands in two years. Until then: what do YOU think?
