Channel: Azure

Composite Applications Roadshow – Dallas & Houston


Microsoft is hosting two events, in Dallas and Houston on 12/8 and 12/9, covering composite application scenarios, governance, the composite application roadmap and upgrading to BizTalk Server 2010.

I just got done presenting the keynote, “Building Composite Application Services with AppFabric” at the Microsoft Las Colinas Campus in Dallas and will be presenting once again at the Houston Microsoft Campus tomorrow (12/9), so if you are in the area but missed today’s event, please feel free to register and attend: https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032469800&IO=ycqB%2bGJQr78fJBMJTye1oA%3d%3d 

In this session, I cover how to enable hybrid composition scenarios with AppFabric, Azure and BizTalk Server 2010 by walking through a hybrid travel & hospitality scenario. Reservation requests are managed on-premise by composing a service hosted in an Azure Web Role with a BizTalk Server 2010 orchestration hosted out at the edge (such as a restaurant location itself), which receives new reservation manifests and reserves a table. The on-premise application is implemented with WF 4 as a Workflow Service hosted in Server AppFabric; it consumes a WCF 4 service hosted in an Azure Web Role, which in turn consumes the BizTalk orchestration using AppFabric Connect for Web Services.
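
To make the shape of that composition a little more concrete, here is a minimal sketch of the kind of WCF contract the Azure Web Role might expose; the type and member names are hypothetical and are not taken from the actual demo:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical contract for the reservation service hosted in the Azure Web Role.
// The on-premise WF 4 Workflow Service would call this via a Send/ReceiveReply
// activity pair; the Web Role implementation would in turn call the BizTalk
// orchestration exposed as a WCF service through AppFabric Connect.
[ServiceContract]
public interface IReservationService
{
    [OperationContract]
    ReservationConfirmation SubmitReservation(ReservationRequest request);
}

[DataContract]
public class ReservationRequest
{
    [DataMember] public string RestaurantId { get; set; }
    [DataMember] public int PartySize { get; set; }
    [DataMember] public DateTime RequestedTime { get; set; }
}

[DataContract]
public class ReservationConfirmation
{
    [DataMember] public string ConfirmationNumber { get; set; }
    [DataMember] public bool TableReserved { get; set; }
}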

Below is the agenda for both events and I am also attaching the deck from my talk for any attendees or others who would like to reference it.

09:00 – 10:00  Composite Application (Windows AppFabric, Azure AppFabric, BizTalk 2010) 

10:00 – 11:00  Accelerate Adoption of SOA – Tools, Best Practices, Governance

11:00 – 12:00  BizTalk 2010 and Beyond Roadmap

12:00 – 01:00  (Lunch) Upgrading BizTalk Server 2006 R2 / BizTalk 2009 to BizTalk 2010


Pat Filoteo and Fellow MVPs on AppFabric & More


I was fortunate to participate in a one-hour discussion on Windows Azure and AppFabric with Microsoft architect Pat Filoteo and a few fellow MVPs, including my Neudesic colleague David Pallmann, a few weeks ago when I was on campus for PDC 10.

The discussion was filmed and posted by the MVP team, uncut in its entirety.

In the segment below, we talk about the potential for AppFabric to transform how we think about composite applications, and how important hybrid composition will be to the enterprise as it works out the right chemistry for leveraging the cloud in a way that increases effectiveness while preserving, and more importantly extending, the reach of on-premise assets to the cloud and beyond.

 

If you are interested in watching all segments, please check out the MVP Award Blog: http://blogs.msdn.com/b/mvpawardprogram/archive/2010/12/08/windows-azure-q-amp-a-discussion-with-microsoft-azure-architect.aspx

DB Tech Con 2011



I had the privilege of recording 3 sessions in the SSWUG studio this week for the upcoming DB Tech Con conference on April 20-22. This is the largest online IT conference in the world, with speakers covering topics ranging from .NET and SQL Server to the cloud.

The focus of my track is all about hybrid solutions in the enterprise and how you can take advantage of AppFabric and BizTalk as a comprehensive platform for building on-premise solutions that take advantage of the cloud in a pragmatic way.

You can find the full session schedule by clicking here, and below are the abstracts for my sessions, which will air starting April 20th:

Building Occasionally Connected Hybrid Applications

Keeping applications and devices synchronized with a company’s back office is a common challenge. Retail, transportation and oil and gas are just a few industries that rely on the ability of software solutions deployed outside of the data center to respond to external events that may occur virtually anywhere. As organizations move certain assets to the cloud, occasionally connected applications are becoming the norm, creating a new breed of hybrid applications. In this session, learn how to implement a sophisticated pattern for enabling push synchronization across your applications and services using Microsoft Sync Framework, SQL Azure and WCF 4.

Building Composite Enterprise Hybrid Services with AppFabric and BizTalk 2010

AppFabric and BizTalk 2010 provide a comprehensive middleware platform for developing, deploying, and managing composite enterprise capabilities both on-premise and in the cloud. Come learn how AppFabric and BizTalk Server can benefit your approach to building and supporting application services at enterprise scale while transcending traditional trust boundaries and enabling the hybrid enterprise.

Hosting WF Services in Windows Azure, Today & Tomorrow

Workflow Services bring many benefits that help you build modern, responsive composite applications. Learn best practices for building and hosting Workflow Services on-premise, how you can take advantage of Windows Azure for hosting your workflow services today, and the improvements coming to Windows Azure that will make hosting your workflow services in Azure more compelling than ever.

The good folks at SSWUG are offering a $30 discount to anyone who provides the code SP11DBTechRG during registration. If you’ve already registered, you can take advantage of this discount by updating your registration and providing the code.

If you are planning on attending, drop me a line on twitter and be sure to say hi in the chat room when my sessions air.

A Middle-Tier Guy’s Take on HTML 5


(My) Early Beginnings

I started my career in software development as a very junior web developer in 1999. I taught myself HTML, VBScript and JavaScript. The browser wars between Microsoft and Netscape were raging. I still remember how exciting my first Classic ASP page was. The fact that I was able to connect to an Access or SQL Server 7 database with just a few lines of code inside my markup was incredible at the time. I consumed new recipes from “4 Guys from Rolla” with relish and kept Scott Mitchell’s titles like “Teach Yourself Classic ASP 3.0 in 21 Days” on my desk at all times (it’s still on my bookshelf, which itself has become a sort of Smithsonian for software development over the last decade). Even more exciting was the fact that it seemed like the beginning of a new era in which I could relegate JavaScript to a necessary evil for handling tricks like focus and validation on the UI.

At the time, I was doing some really fun and interesting work as a young Padawan for the credit card division inside the #1 Visa issuer in the country (we didn’t have a fancy name for it other than “reports” back then, but we were doing very early work on what would become “BI”). By rendering data-driven operational reports dynamically in the browser, we had revolutionized how metrics like Occupancy, Average Handle Time and Multiple-Call Rate were disseminated within the bank, ushering in a new era of productivity, transparency and accountability for everyone from agent to VP. Through this experience, we built a center of excellence which served as a benchmark for other call centers to follow (in fact, we had the likes of Amex come in and review how we did it). Sure, we had displaced an army of Excel macro and Access developers, but such was the price of progress. As a little rogue IT shop basking in our success, I remember a number of “true programmer” personas in the “Real IT” group that tended to undermine what was happening. These were programmers who came from C++ or Java (and whose managers felt threatened by what we were able to do with so few resources) and mostly thumbed their noses at things like the lack of strong typing, OO, etc. They looked at JavaScript as just a tool for dirty hippie web masters (remember that term?) and VBScript as something OK for administrators to use to walk AD hives, but inferior for lacking OO and being handicapped by things like variants. Despite our incredibly visible success (at the CEO level and above) applying these scripting technologies, I was hungry to see the forest for the trees and experimented a bit with COM and COM+, learning how to encapsulate business logic in components and delighting in being able to wire up my COM libraries with Classic ASP, even though, with the exception of my manager and mentor at the time, no one else even had the tools to debug my components.

The Server Era

Then, this thing called Windows DNA came along which promised to marry Windows, ActiveX and the Internet into one big cluster… well, you know. Fortunately, its fate was short-lived, but I remember attending a couple of MSDN events where it seemed like concurrency and single-threaded apartments would become mainstream topics on the web. Maybe us script kiddies would earn some respect after all? Then, just like that, this new, new thing called .NET happened. All of a sudden, I had a ton to learn. All my Classic ASP and JavaScript skills were superseded by Web Forms. I still remember stepping through every line of code in the IBuySpy reference app and being completely blown away. ASP.NET offered something that was OO, strongly typed, and would even render JavaScript for you. Cumbersome JavaScript validation was replaced by server-side templates and controls. Form fields magically remembered their values across postbacks. And, as I learned, Web Forms offered a better separation of concerns with a nice, clean code-behind model that I would later leverage to introduce patterns like MVP, MVC, Page Controller and Front Controller. I built a nice CMS portal for a multi-national bank (which, according to Forbes, is the largest publicly traded company in the world today) with these new skills, and I hear it is still running today. Life was good. This was the age of the web server, and the only major argument within the Microsoft web community at that time was “C# or VB.NET?”

At this point in my career, I’d grown a bit bored with web development. I felt like I’d accomplished everything I wanted to with ASP.NET, and the roles I found myself in started dealing with problems at a more holistic, program level, involving a handful of web apps and coordination between them and new and existing enterprise resources. I discovered Web Services, and the problems I was now trying to solve led to a gradual gravitation toward enterprise architecture and middleware, and before I knew it, I was hooked. Admittedly, it was a great time to make the shift. SOA was king. COM+ had grown up with support for Enterprise Services in .NET, and in parallel, this amazing new messaging framework codenamed “Indigo” was in development that would provide a black belt for hungry ninjas like me who wanted to take over the world with SOA. When it came to Indigo, there were two types of members in the community: those on the inside, and the rest of the world. I was very much part of the rest of the world, but I consumed every bit of content I could get my hands on from folks like Don Box, Juval Lowy, Wenlong Dong, Michele Bustamante and Dr. Nick.

Around the same time, a major re-engineering of a product called BizTalk Server was nearing release which took full advantage of the .NET Framework. My employer at the time, a mid-sized auto retail and finance company, was one of the first BizTalk Server 2004 customers in Phoenix. For a fledgling enterprise integration architect, this was an awesome opportunity. I learned a ton from my friend Todd Sussman, Brian Loesgen and Adam Smith, the latter two of whom I wouldn’t meet in person for a few years, but I had read “BizTalk Server 2004 Unleashed” and “The Blogger’s Guide to BizTalk” from cover to cover more than once. Even better was that I was leading the development of our first SOA and 802.11 enterprise mobility project. We had decided to build the mobile apps, which were a superset of a desktop control center, with ASP.NET. Users would hit the same URL whether they were on the desktop or in the field with their device, and the right screen would render. All of our business logic was wrapped in an ASMX façade which then communicated with our BizTalk orchestrations. With my first real enterprise program under my belt, and WCF nearing GA, I decided that this was what I wanted to do when I grew up, or at least for the next 5 years.

Along with WCF, WPF was nearing release. WPF offered a completely different paradigm on which to build traditional Windows apps: support for rich media like video and sound, flat controls and new gradients, all with an incredibly “webby”, DHTML-looking design aesthetic. At the time, I remember thinking that if presentation technologies like this were successful at winning over users, then one day users wouldn’t care whether they were using a browser or an OS to interact with software. What I didn’t realize was to what extent everyone’s cheese was about to be moved. Here we had two tremendously powerful additions to the .NET Framework, poised to revolutionize how we write software from a presentation and back-end perspective, and yet something subtle was happening that was bigger than Microsoft, bigger than the marvel that is .NET.

The Web Reborn

Content building SOA solutions in the walled gardens of my employers and clients, I remember when, almost overnight, web developers started insisting that I expose JSON endpoints on my services. Apparently, while I was in messaging la-la land, users had grown tired of refreshing their browsers and posting back to the server every time they submitted a form. Turns out, I too was one of them! Users were demanding not just a dynamic web experience, but one that was interactive and felt more like a rich client (a great example at the time was Outlook Web Access). But if you wanted a rich experience, wasn’t that why you stuck to the desktop and used WPF? Following the promulgation of XML as the second coming, wasn’t JSON nothing more than an esoteric relic of JavaScript? AJAX had arrived. What followed was a complete disruption of the seemingly settled division of labor between desktop and web, one that would shift the pendulum once again toward the web.

This was about the same time that Microsoft shared its vision for Software + Services, first unveiled at Mix 07. On the outside, the Mix conference was all about the web, but it was also part of a pretty massive campaign to court and win over designers from Adobe/Macromedia to a new technology: WPF/Everywhere, aka Silverlight. While many had been signaling WPF as the return of the smart client, users were accepting an alternate, even degraded, user experience in exchange for interop. Silverlight offered a subset of WPF capabilities, delivering a design-time and development experience nearly as productive as .NET, and thus began penetrating Apple and Linux via the ubiquity of the browser. This was the first time I saw Microsoft really understanding that it wasn’t only about .NET and Windows anymore. This was evidenced by prominent PMs on stage demonstrating Silverlight apps on Macs, and running Linux distros on VMs to show that you could write the app once and run it (almost) everywhere. A number of incredible apps were released on Silverlight, including Netflix and Hard Rock Café Memorabilia, each of which was both a sign of the times and a hint at what was to come.

To be sure, this had indeed become a software + services world, and for a while Silverlight looked promising despite the tremendous market footprint that Flash had and continues to have. I remember writing about how remarkable Silverlight was and what a game changer it could be. But then an interesting thing happened. JavaScript and REST started appearing more and more in web apps, particularly in the consumer space. At first, despite the popularity of Fielding’s dissertation, REST seemed like a fringe thing, and in many ways a step backward. Here we had a tremendously powerful consortium of standards built around SOAP representing the intersection of the very few things big players like Microsoft, IBM, Sun and Oracle agreed on. What’s more, it seemed that Microsoft had timed this SOAP bubble (no pun intended) perfectly, with its shiny new, equally efficacious messaging stack called WCF, which was, and today remains, unrivaled by other platform vendors. In addition to HTTP, WCF brought TCP, MSMQ and IPC to the enterprise, offering (proprietary) binary encoding and MTOM for optimizing message exchanges. The programming model had (and continues to have) a learning curve over ASMX, but once you got over the hump, you were on a high summit and could see the world for miles around from this new vantage point. So why in the world would anyone want to go back to using HTTP POST and POX? How could it be that the world was settling for REST and JavaScript?

Simple. The world (and the internet) was changing. A gradual yet viral shift was taking place, fueled by the success of Ruby on Rails and PHP, which built the foundation for what is today known as Web 2.0. All of a sudden, anyone with a laptop and an internet connection could download a few packages and get an app up and running in no time. The barrier to entry was financially negligible, and because these languages fully embraced HTTP, a tremendous community was born that was as smart as it was entrepreneurial. Interop and reuse were mere side-effects that led to tremendous adoption by everyone with a browser. These communities fully embraced JavaScript with productivity-boosting libraries that took the sting out of writing it, and similar approaches to packaging robust functionality into libraries such as jQuery followed. In addition, while Ruby on Rails at first embraced SOAP, it later replaced it with REST. Ask your mother or grandmother what Twitter or Groupon is and you’ll have the answer as to why REST and JavaScript have persevered.

In response to all of this, WCF added support for JSON first, followed by REST, and by the release of .NET 4, both were in the box. In addition, WCF RIA Services was introduced, adding an easy button for integrating Silverlight clients with the stack, and WCF Data Services provided a REST-friendly approach to managing CRUD operations using ATOM as the message contract. The success of the ASP.NET MVC framework, which has all but subsumed its older, less cool ASP.NET Web Forms sibling, is further evidence of the developer and user community embracing the browser as a conduit for interoperability on the client.
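
For those who missed that transition, here is a minimal sketch of what a WCF 4 REST endpoint returning JSON looks like; the service, operation and address are illustrative rather than taken from any particular project:

using System;
using System.ServiceModel;
using System.ServiceModel.Web;

// A self-hosted WCF 4 REST endpoint that returns JSON. WebServiceHost wires up
// webHttpBinding and the webHttp behavior automatically at the base address.
[ServiceContract]
public class GreetingService
{
    [OperationContract]
    [WebGet(UriTemplate = "greet/{name}", ResponseFormat = WebMessageFormat.Json)]
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

class Program
{
    static void Main()
    {
        var host = new WebServiceHost(typeof(GreetingService), new Uri("http://localhost:8080/"));
        host.Open();

        // A GET to http://localhost:8080/greet/world now returns "Hello, world" as JSON,
        // which any AJAX client can consume directly.
        Console.WriteLine("Service running. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}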

The Mobility Wars

Even amidst the mobile revolution, which has been largely built on the ubiquity of broadband connections and increasingly capable handheld devices, proprietary platforms have emerged which in many ways are more restrictive and costly than any other platform. Want to build apps for iOS? Learn Objective-C. Android? Got Java? Windows Phone? .NET. Even though they all sit on similar devices and depend on the same infrastructure for messaging (the internet), apps are hardly interoperable with one another. I am sure you know at least two people who carry multiple handsets with them for this very reason, and tablets, sure to be the next wave of mobile innovation, suffer from the same dilemma (if you ask me why HP abandoned webOS, I think it has more to do with the writing on the wall regarding HTML5 than anything else, but more on that shortly). At first, this dilemma seems somewhat benign, perhaps only affecting developers. The truth is it affects everyone. Talk to any iOS developer (that’s not a complete Apple zealot) and they’ll tell you that Objective-C isn’t the most productive language to write apps with. Aside from Java not being too sexy these days, I don’t hear many Android users raving about the selection of apps in Android Market. Same goes for WP7 and Windows Marketplace; I remember how long I had to wait to get Angry Birds on my Windows Phone 7! But the most salient example I can think of is the fact that after a decade of browser wars and tremendous innovation on the client and the server, I still can’t play Flash videos on my iPad or WP7. My iPad refuses to run Silverlight apps, even though the same browser on the desktop is fully capable of doing so. This is a situation that is just plain broken, and it isn’t just me that feels this way…

The World Wants Native Interoperability on the Client, and Today, the Answer is HTML 5.

Like it or not, HTTP has become the ubiquitous interface for both the client and its conduit to the backend. This incredibly simple protocol has had more influence on software over the last decade than any other technology, completely reshaping the strategies of the biggest players. I don’t have to tell you that without HTTP, you don’t have cloud computing. For example, who would have thought that Amazon, after pioneering e-commerce, would get into the cloud computing business by being the first to truly innovate in commercial cloud computing at scale? Who would have thought that Microsoft would completely reinvent itself on Windows Azure and invest as deeply in REST as it has, not only with standards and technologies like OData and WCF Data Services, but also in exposing its incredibly rich and powerful Azure APIs as REST heads? Again, the answer is simple. HTTP has become the lingua franca of the interconnected world, and the disruption started with the first packets on the ARPANET in the late ’60s.

Just as SOAP was developed to aid in interop between vendor platforms, banks and partners, REST has increased the native interoperability of applications on the web. Hold your rotten tomatoes, but I am afraid a similar fate awaits iOS, Android, WP7, WPF, ASP.NET and Silverlight. Are they going away tomorrow, next year or 5 years from now? Nope. SOAP still has a very important place in back-end systems, and I don’t mean just for legacy applications. When you want to work with contracts and interfaces (very important when designing critical message exchanges for business processes), or need support for heavy lifting such as distributed transactions, reliable messaging, multiple transports and the like, SOAP is your tool. A case in point, as I mentioned briefly above, is Windows Azure. While the investment in REST has been significant, these REST endpoints are designed for optimizing interop, allowing any client or platform to take advantage of the services offered by Microsoft’s cloud. Want to start or stop a compute instance? Need to write a file to blob storage, retrieve an entity from table storage or publish a message for hundreds of subscribers to consume over the AppFabric Service Bus? There’s a REST API for that. While adoption is key to the success of any product or platform, the API, and thus REST, is not the end itself but merely a means to the end, and as such is only the tip of the iceberg. Below the water’s surface, there is, and will continue to be, a ton of SOAP and .NET.
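
To make “there’s a REST API for that” a little more concrete, here is a rough sketch of writing a block blob with nothing but HTTP; the account, container and SAS token are placeholders, and details such as API versioning and authentication options are simplified:

using System;
using System.IO;
using System.Net;
using System.Text;

// Writing a block blob over raw HTTP, assuming a pre-generated Shared Access
// Signature URL with write permission (everything after "?" is a placeholder).
class BlobRestSketch
{
    static void Main()
    {
        // Hypothetical SAS URL; in practice you would generate this server-side.
        string sasUrl = "https://myaccount.blob.core.windows.net/mycontainer/hello.txt?sv=...";

        byte[] content = Encoding.UTF8.GetBytes("Hello from plain HTTP");
        var request = (HttpWebRequest)WebRequest.Create(sasUrl);
        request.Method = "PUT";
        request.Headers.Add("x-ms-blob-type", "BlockBlob"); // required for block blobs
        request.ContentLength = content.Length;

        using (Stream body = request.GetRequestStream())
        {
            body.Write(content, 0, content.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Blob written: " + response.StatusCode); // expect 201 Created
        }
    }
}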

The same is happening on the client with HTML5. While Silverlight and ASP.NET MVC were a step in the right direction and aren’t going to just vanish tomorrow, HTML5 offers true interop at the native (browser) level, and since native interop is what the world wants, it will win, at least for now. I say “at least for now” because, as tempting as it is to chalk this up to just another trend, unlike the crusty programmer personas I mentioned when I started this stream of consciousness that has become a rather long post (thanks for staying with me this far, btw), I’ve been doing this long enough to have seen software reinvent itself a few times now. I’ve learned that rather than cry over spilled milk, it is important to embrace change, and this means you have to expect and be prepared for anything. HTML 5 could fail, and companies that have already invested significantly in ASP.NET, Silverlight, Flash and one (or several) mobile platforms aren’t going to just jump in right away, but they are going to watch very, very carefully. If I am building a web app or a rich client app from scratch today, though, I’m going to think very, very hard before I decide to do so in anything but HTML 5.

Who Moved my Cheese?

Who would have thought that Microsoft, with an incredibly lucrative productivity, OS, server and tools business, would bet the farm on Windows Azure? As innovative as I think Microsoft’s (PaaS) cloud story is, in many ways it is the software giant’s response to its cheese being moved by the web. And make no mistake, it is a massive bet. Initial buzz around Windows 8 has so far been met with both positive and quite negative feedback after the revelation that Windows 8 will make HTML 5 a first-class citizen on the desktop and the tablet. Viewed simplistically, the seams between client and server/backend are exposed with Windows 8 and Windows Azure respectively. At first, this seems quite alarming (Joel Spolsky saw this coming over 8 years ago), but if you think about it, it makes perfect sense. If the client is moving to the browser, the value proposition of a beefy desktop or a rack of servers in an opaque data center is diminished significantly. However, all that data (records, images, videos, files) still has to be stored and served up somewhere, and that somewhere needs to be natively interoperable with the client at the tip of the iceberg and get the heavy lifting done below the water’s surface. The need for middleware, integration between the client, that somewhere, and its data, applications and systems, has never been greater.

Even though I’ve joked to friends who stayed on the front end that I didn’t miss anything by skipping WPF and Silverlight because we’re back to where I first began with HTML and JavaScript, the reality is that the last decade has been incredibly important in reinforcing that innovation is bigger than any platform vendor or standards body, because unlike them, it is you and me who determine the fate of technology, and for that we should all be proud.

So, What Now?

I provide no value unless I’m designing distributed solutions that can be consumed by client applications and that automate the business processes they serve. So just as before, it’s time to buckle down once again and learn the client technologies that one of my primary customers, the UI developer, will soon be using. First in line is Mango. Next is HTML 5. And who knows, after specializing in integration for the last 5 years, I just might start generalizing a bit and get back into web development again.

See you at the other end of the wire.

Azure Service Bus Connect EAI and EDI “Integration Services” CTP


I am thrilled to share in the announcement that the first public CTP of Azure Service Bus Integration Services is now LIVE at http://portal.appfabriclabs.com.

The focus of this release is to enable you to build hybrid composite solutions that span on-premise investments such as Microsoft SQL Server, Oracle Database, SAP, Siebel eBusiness Applications and Oracle E-Business Suite, allowing you to compose these mission-critical systems with applications, assets and workloads that you have deployed to Windows Azure, enabling first-class hybrid integration across traditional network and trust boundaries.

In a web-to-web world, many of the frictions addressed by these capabilities still exist, albeit to a smaller degree. The reality is that as the web and cloud computing continue to gain momentum, investments on-premise are, and will continue to be, critical to realizing the full spectrum of benefits that cloud computing provides both in the short and long term.

So, what’s in this CTP?

Azure Service Bus Connect provides a new Server Explorer experience for LOB integration, exposing a management head that can be accessed on-premise via Server Explorer or PowerShell to create, update, delete or retrieve information from LOB targets. This provides a robust extension of the Azure Service Bus relay endpoint concept, which acts as a LOB conduit (LobTarget, LobRelay) for bridging these assets by extending the WCF LOB Adapters that ship with BizTalk Server 2010. The beauty of this approach is that you can leverage the LOB Adapters using BizTalk as a host or, for a lighter-weight approach, use IIS/Windows Server AppFabric to compose business operations on-premise and beyond.

In addition, support for messaging between trading partners across traditional trust boundaries in business-to-business (B2B) scenarios using EDI is also provided in this preview, including AS2 protocol support with X12 chaining for send and receive pipelines, FTP as a transport for X12, agreement templates, a partners view with profiles per partner, a resources view, an intuitive, metro-style EDI Portal, and a Transforms project design surface.

Just as with on-premise integration, friction always exists when integrating different assets which may exist on different platforms, implement different standards, and at a minimum have different representations of common entities that are part of your composite solution’s domain. What is needed is a mediation broker that can be leveraged at internet scale to apply message and protocol transformations across disparate parties, and this is exactly what the Transforms capability provides. Taking an approach that will be immediately familiar to the BizTalk developer, a mapper-like experience is provided within Visual Studio for interactively mapping message elements and applying additional processing logic via operations (functoids).

In addition, XML Bridges, which include the XML One-Way Bridge and the XML Request-Reply Bridge, are an extension to Azure Service Bus that supports critical patterns such as protocol bridging, routing actions and external data lookup for message enrichment, with support for both WS-I and REST endpoints and any combination thereof.

As shown below in the MSDN documentation, “bridges are composed of stages and activities where each stage is a message processing unit in itself. Each stage of a bridge is atomic, which means either a message completes a stage or not. A stage can be turned on or off, indicating whether to process a message or simply let it pass through”.

Stages of a bridge

Taking a familiar VETR approach to validate, extract, transform and route messages from one party to another, along with the ability to enrich messages by composing other endpoints in-flight (supported protocols include HTTP, WS-HTTP and Basic HTTP, HTTP Relay Endpoints, Service Bus Queues/Topics and any other XML Bridge), the Bridge is a very important capability and brings robust options for extending Azure Service Bus as a key messaging broker across integration disciplines.

In reality, these patterns have no more to do with EAI than with traditional, contemporary service composition, and they become necessary once you move beyond a point-to-point approach and need to elegantly manage integration and composition across assets. As such, this capability acts as a bridge to Azure Service Bus that is very powerful in and of itself, even in non-EAI/EDI scenarios where endpoints can be virtualized, increasing decoupling between parties (clients/services). In addition, this capability further enriches what is possible when using the BrokeredMessage property construct as a potential poor-man’s routing mechanism.
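
That routing mechanism is easy to see with the plain Service Bus SDK, outside of the Integration Services CTP itself; in the minimal sketch below, the namespace, credentials, topic, subscription and property names are all placeholders:

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Routing with BrokeredMessage properties and SqlFilter subscription rules.
class PropertyRoutingSketch
{
    static void Main()
    {
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourAcsKey");
        var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);

        // Create a topic with two subscriptions that filter on a custom property.
        var namespaceManager = new NamespaceManager(serviceUri, tokenProvider);
        if (!namespaceManager.TopicExists("orders"))
        {
            namespaceManager.CreateTopic("orders");
            namespaceManager.CreateSubscription("orders", "west", new SqlFilter("Region = 'West'"));
            namespaceManager.CreateSubscription("orders", "east", new SqlFilter("Region = 'East'"));
        }

        // Publish a message whose property drives the routing decision.
        var factory = MessagingFactory.Create(serviceUri, tokenProvider);
        var topicClient = factory.CreateTopicClient("orders");
        var message = new BrokeredMessage("new order payload");
        message.Properties["Region"] = "West";
        topicClient.Send(message);

        // Only the matching subscription receives the message.
        var westClient = factory.CreateSubscriptionClient("orders", "west", ReceiveMode.PeekLock);
        BrokeredMessage received = westClient.Receive(TimeSpan.FromSeconds(10));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete();
        }
    }
}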

In closing, the need to address the impedance mismatch that exists between disparate applications that must communicate with each other is a friction that will continue to exist for many years to come, and while traditionally, many of these problems have been solved by expensive, big iron middleware servers, this is changing.

As with most technologies, new possibilities are often unlocked as residual side-effects of something bigger, and this is certainly the case with how innovative and strategic Azure Service Bus is to Microsoft’s PaaS strategy. Azure Service Bus continues to serve as a great example of a welcome shift to a lightweight, capability-based, platform-oriented approach to solving tough distributed messaging and integration problems while honoring the existing investments that organizations have made, and it benefits from a common platform approach that is unique in the market. And while this shift will take some time, in the long run enterprises of all shapes and sizes only stand to benefit.

To get started, download the SDK & samples from http://go.microsoft.com/fwlink/?LinkID=184288 and the tutorial & documentation from http://go.microsoft.com/fwlink/?LinkID=235197, and watch this blog and the Windows Azure blog for more details coming soon.

Happy Messaging!

NuCon 2012–Feb 16th, Irvine, CA


I’d like to pass on some details regarding an event I will be speaking at in Irvine, CA on February 16th.

NuCon is a one-day conference put on by my employer, Neudesic, featuring talks and content from fellow Neudesic colleagues like David Pallmann, Ted Neward and Simon Guest, just to name a few.

As Irvine is Neudesic’s headquarters, the event provides a great opportunity to gain insight into the future of technology as seen by my colleagues, along with pragmatic guidance you can put to use the following day, while networking with other Neudesic customers, executive management, partners and thought leaders to help guide your strategy on making the most of the tremendous opportunities that the Microsoft platform and Neudesic products have to offer.

In my talk, Hybrid Composition on the Microsoft Application Integration platform, I’ll share how organizations of all shapes and sizes can benefit from the improvement, automation and streamlining of their business operations through hybrid composition.

Abstract

In today’s technology landscape, exposing key functional areas as services has become the norm for achieving agility and is a requirement for taking advantage of the dramatic improvements that modern middleware capabilities, both on-premise and in the cloud, provide.

As organizations adapt to this new hybrid model, a shift from a homogenous, single-product, big-iron approach to a heterogeneous, best-in-class, capability-driven model is necessary for realizing the benefits of service orientation and enabling the composition of these services on-premise, in the cloud and behind the firewall without making big spending commitments on a product that may only meet some of these needs.

The Microsoft platform offers a number of capabilities for achieving these goals across common Hosting, Workflow, Rules, EAI and Messaging workloads that allow you to choose the right capabilities for delivering your intended business outcomes.

BizTalk Server 2010 and Windows Server AppFabric 1.1 provide a comprehensive middleware platform for developing, deploying and managing composite enterprise capabilities on-premise, while Windows Azure Service Bus and the Access Control Service allow you to extend your investments beyond traditional trust and network boundaries, making the cloud and other partner/vendor endpoints merely an extension of your enterprise.

Come learn how Windows Server AppFabric, WCF, WF Services, BizTalk Server and Windows Azure can benefit your approach to building and supporting application services at enterprise scale while transcending traditional trust boundaries and enabling the hybrid enterprise.

To give you an idea of the breadth and depth of the sessions, in my talk I’ll be covering and showing live demos of the latest capabilities that enable you to build hybrid composite solutions to drive differentiation and innovation within your organization:

Windows Server AppFabric 1.1 Caching (On-Prem) Featuring:

  • AppFabric distributed caching, including the Cache-Aside caching pattern and Read-Through caching, new in AppFabric 1.1 (a minimal cache-aside sketch follows below)
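
For anyone new to the pattern, a minimal cache-aside sketch against Windows Server AppFabric Caching looks something like this; the cache name, types and data access call are placeholders, not the demo code:

using System;
using Microsoft.ApplicationServer.Caching;

// Cache-aside: check the cache first, fall back to the system of record on a miss,
// then populate the cache for subsequent readers.
class CacheAsideSketch
{
    static readonly DataCacheFactory Factory = new DataCacheFactory(); // reads the client configuration
    static readonly DataCache Cache = Factory.GetCache("default");

    public static Product GetProduct(string productId)
    {
        var product = (Product)Cache.Get(productId);
        if (product == null)
        {
            product = LoadProductFromDatabase(productId);          // cache miss: load from the store
            Cache.Put(productId, product, TimeSpan.FromMinutes(10)); // keep it warm for later callers
        }
        return product;
    }

    static Product LoadProductFromDatabase(string productId)
    {
        return new Product { Id = productId }; // stand-in for a real data access call
    }
}

[Serializable]
public class Product
{
    public string Id { get; set; }
}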

WF 4 Workflow Services (On-Prem) Featuring:

  • State Machine activity, new in .NET 4 Platform Update 1 and .NET 4.5 
  • BizTalk Mapper for WF 4 via AppFabric Connect
  • Long-running workflows
  • Workflow Correlation
  • Composition with WCF services in Windows Azure

Windows Server AppFabric Deployment (On-Prem) Featuring:

  • Easy deployment with Microsoft Web Deploy
  • Windows Server AppFabric Configuration Experience

WCF hosting in Windows Azure Web Roles (Cloud) Featuring:

  • Azure Web Role hosting
  • Azure Service Bus Topic client

Azure Service Bus Brokered Messaging (Hybrid) Featuring:

  • Brokered messaging from Azure to on-premise custom applications behind the firewall
  • Topics and Subscriptions

BizTalk Server 2010 Orchestration & Messaging (On-Prem) Featuring:

  • Custom WCF Adapter for consuming messages off an Azure Service Bus Topic
  • Support for custom WCF behaviors
  • Support for hybrid LOB integration with systems such as Dynamics CRM or SAP

So, if you are interested in attending, please consider yourself invited! Click on the links in the invitation below to register (save $100 if you register before Feb 1) and I look forward to seeing you at NuCon 12!

The Goods: WebSockets Programming in .NET 4.5 and Windows Azure at That Conference


We’re just wrapping up day 2 of sessions at That Conference in Wisconsin Dells, WI and like yesterday, this has been a great day chock full of sessions, great conversations and meeting new people.

On Monday I had the opportunity to support Microsoft at their table by staffing an Ask the Experts slot on Windows Azure, which was a great chance to talk to folks about Azure, Azure Service Bus and more, and to hand out drink tickets to those with the best questions.

I was also flattered to be interviewed by Russ Fustino for ComponentOne. It was great catching up with Russ- a true legend in the Microsoft developer community!

I am really impressed by the developer scene here in the Midwest, with developers from all languages and platforms coming together to invest in themselves, their organizations and most of all their community for 3 days at the Kalahari Resort. Big shout out to Scott Seely, Clark Sell and the legions of invisible people behind the scenes for making this a great inaugural event.

On that note, I’d like to thank everyone who attended my talk on WebSockets in .NET 4.5 and Windows Azure.

I essentially reprised my content from Azure Connections in Las Vegas this Spring, with updates to Visual Studio 2012 RC and Windows Server 2012.

Please take a look at my post recapping the content if you want more details, but be sure to take the bits posted below instead if you are targeting the RC versions of VS 2012 and Windows Server 2012/Windows 8, as the code samples have changed to align with the RC wave:

 

Demo 1

Live chat sample of Silverlight-based client and WCF Service running on Windows Azure.

Please note that this implementation is deprecated and will not be carried forward.

Instead, please use .NET 4.5 WebSocket support in WCF and ASP.NET.

Sample:

http://html5labs.cloudapp.net/WebSockets/ChatDemo/wsdemo.html


Demo 2

Simple “Hello World” example of an ASP.NET ASHX handler using WebSocketHandler and an HTML 5 client, demonstrating a trivial “echo” service that displays the date/time each second.

Also included in the Demo 2 folder is a WCF version of the same implementation (which I did not demo during my talk).
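
If you just want the flavor of the demo without downloading the bits, here is a minimal sketch of the same idea; note that it uses the WebSocket support built into ASP.NET 4.5 (HttpContext.AcceptWebSocketRequest) rather than the WebSocketHandler class the demo project uses, and the handler name is mine:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Web.WebSockets;

// An ASHX-style handler that upgrades the request to a WebSocket and pushes
// the current date/time to the client once per second.
public class ClockHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        if (context.IsWebSocketRequest)
        {
            context.AcceptWebSocketRequest(PushTime);
        }
    }

    private static async Task PushTime(AspNetWebSocketContext wsContext)
    {
        WebSocket socket = wsContext.WebSocket;
        while (socket.State == WebSocketState.Open)
        {
            byte[] payload = Encoding.UTF8.GetBytes(DateTime.Now.ToString("o"));
            await socket.SendAsync(new ArraySegment<byte>(payload),
                WebSocketMessageType.Text, true, CancellationToken.None);
            await Task.Delay(1000); // one push per second
        }
    }
}

On the browser side, the HTML 5 client simply opens the socket with new WebSocket("ws://yourhost/ClockHandler.ashx") and renders each onmessage payload.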

 

Projects:


SimpleEventingSample
SimpleEventingService

Requires Visual Studio 2012 RC & Windows 8/Windows Server 2012 RC/RP


Demo 3

Example of using the Twitter Search API as an event stream with WCF using WebSocketService, LINQ to Twitter and HTML 5, with some nice jQuery and CSS animation.

Projects:


StatusStreamClient
StatusStreamService
StatusStreamServiceTests

Requires Visual Studio 2012 RC & Windows 8/Windows Server 2012 RC/RP


Demo 4

Another event streaming example, this time using the Twitter Streaming API, Node.js and WebSocket.IO in Windows Azure and HTML 5 animations with CSS 3 box shadow and rotate.

As opposed to the Twitter Search API used in Demo 3, you can see that events are captured immediately, and the Streaming API is much more reliable than the Search API.


 

On a side note, the next issue of CODE Magazine (Sept/Oct 2012) will include a complete, step by step walkthrough of everything you saw in the demo so if you are interested, please check it out and let me know what you think!

Thanks again for attending my talk, and please share any comments, feedback or questions by commenting below.

Speaking at AzureConf on Channel 9 Next Week


I am flattered to share that I’ve been invited to speak at AzureConf one week from today at Channel 9 Studios in Redmond on Wednesday, 11/14.

AzureConf is a premier live streamed event delivered for the community by the community from Channel 9 Studios on the Microsoft Campus in Redmond, WA.

Brady Gaster and Corey Fowler have been hard at work for several weeks organizing content and logistics, and I can tell you that I am both excited and humbled by the speaker line-up and content.

The event will kick off with a keynote presentation by Scott Guthrie, followed by numerous sessions delivered by Windows Azure community members, including my friend, colleague and fellow (Azure) MVP Michael Collier, and esteemed MVPs and Insiders flying in from all over the country and world to join Scott and team at the Channel 9 studios, including Magnus Martensson and Eric Boyd, just to name a few.

Streamed live for an online audience on Channel 9, the event will allow you to see how customers, partners and MVPs are making the most of their skills to develop a variety of innovative applications on Windows Azure. The goal of the conference is to be just as valuable to seasoned Azure developers and architects as to those just learning the tremendous power of this exciting platform.

You can learn more about AzureConf by visiting http://www.windowsazureconf.net/. Please be sure to register as capacity for the live streamed event will be limited (however sessions will be available for playback following the conference).

Thank you for your interest and please help spread the word!



AzureConf Session is Up


I had a total blast delivering my session on WebSockets on Windows Azure at AzureConf on Wednesday and am pleased to share that the recording of my session is now up.

You can view it by clicking below or follow this link to view on Channel 9 along with a number of other fantastic sessions that highlight how MVPs are using Windows Azure today.


Thanks again to Brady, Corey and team for an amazing event!

Interview on Magnanimous Software Net Cast


I had the honor of being interviewed by fellow MVP Magnus Martensson (@noopman) for his Magnanimous Software Podcast (love that name).

Other than the dubious task of following really smart guys like Glenn Block and Mads Torgersen in this new series, we had a good chat about Neuron ESB, Azure Service Bus, BizTalk Server 2013, my book and other topics. In addition, Magnus managed to uncover some little-known tidbits about my past. :)

The interview was a lot of fun and is now available here for your listening pleasure: http://msnetcast.com/0003/rick-garibay-wcf-biztalk-servicebus-book

Links from the show:

Thanks Magnus!

Introducing the Neuron Azure Service Bus Adapter for Neuron 3.0


Anyone who knows me knows that I’m a messaging nerd. I love messaging so much that I all but gave up web development years ago to focus exclusively on the completely unglamorous space of messaging, integration and middleware. What drives me to this space? Why not spend my time and focus my career on building sexy web or device apps that are much more fashionable and that produce something tangible that people can see, touch and feel?

These are questions I ponder often, but every time I do, an opportunity presents itself to apply my passion for messaging and integration in new and interesting ways that have a pretty major impact for my clients and the industry as a whole. Some recent examples of projects I’ve led and coded on span the intelligent transportation and gaming spaces, including an automated gate management solution to better secure commercial vehicles for major carriers when they’re off the road; integrating slot machines for a major casino on the Vegas strip with other amenities on the property to create an ambient customer experience; and increasing the safety of our highways by reading license plates and pushing messages to and from the cloud. These are just a few recent examples of the ways in which messaging plays an integral role in building highly compelling and interesting solutions that otherwise wouldn’t be possible. Every day, my amazing team at Neudesic is involved in designing and developing solutions on the Microsoft integration platform that have truly game-changing business impacts for our clients.

As hybrid cloud continues to prove itself as the most pragmatic approach for taking advantage of the scale and performance of cloud computing, the need for messaging and integration becomes only more important. Two technologies that fit particularly well in this space are Neuron and Azure Service Bus. I won’t take too much time providing an overview of each here as there are plenty of good write ups out there that do a fine job, but I do want to share some exciting news that I hope you will find interesting if you are building hybrid solutions today and/or working with Azure Service Bus or Neuron.

Over the last year, the Neuron team at Neudesic has been hard at work cranking out what I think is the most significant release since version 1.0, which I started working with back in 2007, and I’m thrilled to share that as of today, Neuron 3.0 is live!

Building on top of an already super solid WCF 4.0 foundation, Neuron 3.0 is a huge release for both Neudesic and our clients, introducing a ton of new features including:

 

  • Full Platform support for Microsoft .NET 4/LINQ, Visual Studio 2010/2012
  • New features in Management and Administration including
    • New User Interface Experience
    • Queue Management
    • Server and Instance Management
    • Dependency Viewers
  • New features in Deployment and Configuration Management including
    • New Neuron ESB Configuration storage
    • Multi Developer support
    • Incremental Deployment
    • Command line Deployment
  • New features in Business Process Designer including
    • Referencing External Assemblies
    • Zoom, Cut, Copy and Paste
    • New Process Steps
      • Duplicate Message Detection
      • For Each loop
      • ODBC
  • New Custom Process Steps including
    • Interface for Controlling UI Properties
    • Folder hierarchy for UI display

  • New features in Neuron Auditing including
    • Microsoft SQL Azure
    • Excluding Body and Custom Properties
    • Failed Message Monitoring
  • New Messaging features including
    • AMQP-powered Topics with RabbitMQ
    • Improved MSMQ Topic Support
    • Adapters
      • POP3 and Microsoft Exchange Adapters
      • ODBC Adapter enhancements
      • Azure Service Bus Adapter
  • New in Service Broker including
    • REST enhancements
    • REST support for Service Policies
    • WSDL support for hosted SOAP services
  • Many enhancements to UI, bug fixes and improvements to overall user experience.

In version 2.6, I worked with the team to bring Azure Service Bus Relay Messaging in as a first-class capability. Since Neuron is built on .NET and WCF, and the relay service is exposed very nicely using the WCF programming model, adding the relay bindings to Neuron’s Service Endpoint feature was a no-brainer. This immediately provided the ability to bridge or extend the on-premise pub-sub messaging, transformation, mediation, enrichment and security capabilities with Azure Service Bus Relay, enabling new, highly innovative hybrid solutions for my team and our customers.
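
For context, exposing an on-premise WCF service over the relay looks roughly like the sketch below (the namespace, issuer, key and echo contract are placeholders); this is the kind of plumbing the Neuron Service Endpoint feature takes care of for you:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

// Hosting a plain WCF service behind the firewall and projecting it onto the
// Azure Service Bus Relay with NetTcpRelayBinding.
class RelayHostSketch
{
    static void Main()
    {
        var host = new ServiceHost(typeof(EchoService));

        var endpoint = host.AddServiceEndpoint(
            typeof(IEchoService),
            new NetTcpRelayBinding(),
            ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "echo"));

        // Authenticate the listener against the Access Control Service.
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourAcsKey")
        });

        host.Open(); // the on-premise service is now reachable through the relay
        Console.WriteLine("Listening on the relay. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}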

Between then and this new release, Microsoft released support for queues and topics, also known as Brokered Messaging. These capabilities introduced the ability to model durable, pull-based pub-sub messaging in scenarios where such a brokered mechanism makes sense. To be clear, Brokered Messaging is not a replacement for Relay; in fact, we’ve worked on a number of solutions where the firewall-friendly push messaging capabilities of Relay fit and even complement certain scenarios (notification-first, pull-based pub-sub is a very handy-dandy messaging pattern where both are used, and perhaps I’ll write that up some day). Think of them as tools in your hybrid cloud messaging toolbox.

It didn’t take long to see the potential of these additions to Azure Service Bus, and I started having discussions with the Neuron team at Neudesic and the Azure Service Bus team at Microsoft about building an adapter that, like Relay, would bring Brokered Messaging capabilities to Neuron, enabling a complete, rich spectrum of hybrid messaging capabilities.

Luckily, both teams agreed it was a good idea and Neudesic was nice enough to let me write the adapter.

Obviously, as a messaging nerd, this was an incredibly fun project to work on, and after just a couple of hours I had my first spike up and running on a very early build of Neuron 3.0, demonstrating a message published to Neuron being re-published on an Azure Service Bus topic. Seven major milestones, a number of internal demos, walkthroughs with the Service Bus team and a ton of load and performance testing later, I completed what is now the initial release of the Neuron Azure Service Bus Adapter, which ships with Neuron 3.0!

What follows is a lap around the core functionality of the adapter largely taken from the product documentation that ships with Neuron 3.0. I hope you will find the adapter interesting enough to take a closer look and even if hybrid cloud is not on your mind, there are literally hundreds of reasons to consider Neuron ESB for your messaging needs.

Overview

Windows Azure Service Bus is a Platform as a Service (PaaS) capability from Microsoft that provides a highly robust messaging fabric hosted in Windows Azure.

Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB by providing pub-sub messaging capable of traversing firewalls, a taxonomy for projecting entities and very simple orchestration capabilities via rules and actions.

As shown below, Azure Service Bus bridges on-premise messaging capabilities, enabling the development of hybrid cloud applications that integrate external services and service providers with assets located behind the firewall, allowing a new, modern breed of compositions to transcend traditional network, security and business boundaries.


Bridging ESBs in Hybrid Clouds – Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB enabling a next generation of hybrid cloud applications that transcend traditional network, security and business boundaries.

There are two services supported by Azure Service Bus:

  • Azure Service Bus Relay: Serves as a push-based relay between two (or more) endpoints. A client and service (or services) establish an outbound, bi-directional socket connection over either TCP or HTTP on the relay and thus, messages from the client tunnel their way through the relay to the service. In this way, both the client and service are really peers on the same messaging fabric.

 

  • Azure Service Bus Brokered Messaging: Provides a pull-based durable message broker that supports queues, topics and subscriptions. A party wishing to send messages to Azure Service Bus establishes a TCP or HTTP connection to a queue or topic and pushes messages to the entity. A party wishing to receive messages from Azure Service Bus establishes a TCP or HTTP connection and pulls messages from a queue or subscription.

Neuron ESB 3.0 supports both Azure Service Bus services and this topic focuses on support of Azure Service Bus Brokered Messaging via the Neuron Azure Service Bus Adapter.

For more information on support for Azure Service Bus Relay support, please see “Azure Service Bus Integration” in the “Service Endpoints” topic in the Neuron ESB 3.0 product documentation.

About the Neuron Azure Service Bus Adapter

The Neuron Azure Service Bus Adapter provides full support for the latest capabilities provided by the Windows Azure SDK version 1.7.

Once the Neuron Azure Service Bus adapter is registered and an Adapter Endpoint is created, all configuration is managed through the property grid of the Adapter located on the properties tab of the Adapter Endpoint’s Details Pane:


Neuron Azure Service Bus Adapter – Property Grid – All configuration for the adapter is managed through the property grid. Properties are divided into 3 sections: General, Publish Mode Properties, and Subscribe Mode Properties.

Please note that in order to connect to an Azure Service Bus entity with the Neuron Azure Service Bus adapter, you need to sign up for an Azure account and create an Azure Service Bus namespace with the required entities and ACS configuration. For more information, visit http://azure.com

Features

The Neuron Azure Service Bus adapter supports the following Azure Service Bus Brokered Messaging features:

  • Send to Azure Service Bus Queue
  • Send to Azure Service Bus Topic
  • Receive from Azure Service Bus Queue
  • Receive from Azure Service Bus Subscription

In addition, the Neuron Azure Service Bus adapter simplifies the development experience by providing additional capabilities typical in production scenarios without the need to write custom code including:

  • Smart Polling
  • Eventual Consistency
  • Transient Error Detection and Retry

The Neuron Azure Service Bus adapter is installed as part of the core Neuron ESB installation. The adapter is packaged into a single assembly located within the \Adapters folder under the root of the default Neuron ESB installation directory:

  • Neuron.Esb.Adapters.AzureServiceBusAdapter.dll

In addition, the following assembly is required and automatically installed in the root of the folder created for the service instance name:

  • Microsoft.ServiceBus.dll (Azure SDK version 1.7)

To use the adapter, it must first be registered within the Neuron ESB Explorer Adapter Registration Window. Within the Adapter Registration Window, the adapter will appear with the name “Azure Service Bus Adapter”. Once registered, a new Adapter Endpoint can be created and configured with an instance name of your choice:


Neuron ESB Explorer Adapter Registration Window - Property Grid – Before configuring the adapter instance for Publish or Subscribe mode, the adapter must first be registered.

Supported Modes

Once the initial registration is complete, the Neuron Azure Service Bus adapter can be configured in one of 2 modes: Publish and Subscribe.

Publish

Publish mode allows Neuron ESB to monitor an Azure Service Bus Queue or Subscription by regularly polling, de-queuing all the messages, and publishing those messages to a Neuron ESB Topic. Messages are read synchronously via a one-way MEP.


Receiving Messages from Azure Service Bus – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

Configuration

Configuring the Publish mode of the Neuron Azure Service Bus adapter requires that, at a minimum, the following properties are set:

General Properties
  • Azure Service Bus Namespace Name - A registered namespace on Azure Service Bus. For example 'neudesic' would be the namespace for: sb://neudesic.servicebus.windows.net (for information on how to provision, configure and manage Azure Service Bus namespaces, please see the Azure Service Bus topic on http://azure.com).
  • Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS - For information on how to provision, configure and manage Azure Access Control namespaces, please see the Azure Access Control topic on http://azure.com).
  • Azure ACS Key – The shared key used in conjunction with Azure ACS Issuer Name.
  • Azure Entity Type - Queue or Subscription
  • Azure Channel Type – Default if outbound TCP port 9354 is open, or HTTP to force communication over HTTP ports 80/443 (in Default mode, the Neuron Azure Service Bus Adapter will try to connect via TCP; if outbound TCP port 9354 is not open, choose HTTP).
  • Retry Count - The number of Service Bus operations retries to attempt in the event of a transient error (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Minimum Back-Off - The minimum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Maximum Back-Off - The maximum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
Publish Properties
  • Azure Queue Name – The name of the queue that you want to receive messages from (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
  • Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
  • Azure Subscription Name - The name of the subscription you want to receive messages from (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
  • Delete After Receive – False by default. If set to True, deletes the message from the queue or subscription after it is received regardless of whether it is published to Neuron successfully (for more information on this setting, see the “Understanding Eventual Consistency” topic).
  • Wait Duration - Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (for more information on this setting, see the “Understanding Smart Polling” topic).
  • Neuron Publish Topic - The Neuron topic that messages will be published to. Required for Publish mode.
  • Error Reporting – Determines how all errors are reported in the Windows Event Log and Neuron Logs. Either as Errors, Warnings or Information.
  • Error on Polling – Determines whether polling of the data source continues on error and whether consecutive errors are reported.
  • Audit Message on Failure - Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.
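
To put these settings in context, here is a minimal sketch (using the SDK 1.7-era Microsoft.ServiceBus client types the adapter ships with) showing how a namespace, ACS issuer name and key become a Service Bus connection. This is illustrative only, not the adapter’s internal code, and the namespace, issuer, key and queue name are placeholders:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

static QueueClient CreateQueueClient()
{
    // Placeholder values - substitute your own namespace, ACS issuer and key.
    Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "neudesic", string.Empty);
    TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<ACS key>");

    // Equivalent of Azure Channel Type = HTTP (force 80/443 instead of outbound TCP 9354):
    // ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;

    MessagingFactory factory = MessagingFactory.Create(serviceUri, tokenProvider);
    return factory.CreateQueueClient("orders");   // hypothetical queue name
}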

The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Receive” in Publish mode:

Publish Mode General Configuration – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Publish mode:

Publish Mode Properties Configuration – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

Subscribe

Subscribe mode allows Neuron ESB to write messages that are published to Neuron ESB to an Azure Service Bus queue or topic. In this manner, Neuron ESB supports the ability to bridge an Azure Service Bus entity, allowing for on-premise parties to seamlessly communicate with Azure Service Bus. Once Neuron ESB receives a message, it sends the message to an Azure Service Bus Queue or Topic.

Sending Messages to Azure Service Bus – When in Subscribe mode, the adapter supports sending messages published on Neuron ESB to an Azure Service Bus entity.

Configuration

In addition to the General Properties covered under the Publish mode documentation, configuring the Subscribe mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:

Subscribe Properties
  • Adapter Send Mode - Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (for more information on this setting, see the “Choosing Synchronous vs. Asynchronous” topic).
  • Adapter Queue Name - The name of the queue you want to send messages to (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
  • Adapter Topic Name - The name of the topic you want to send messages to (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).

The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Send” in Subscribe mode:

Subscribe Mode General Configuration – When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.

The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Subscribe mode:

Subscribe Mode Properties Configuration – When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.

Understanding Transient Error Detection and Retry

When working with services in general and multi-tenant PaaS services in particular, it is important to understand that in order to scale to virtually hundreds of thousands of users/applications, most services like Azure Service Bus, SQL Azure, etc. implement a throttling mechanism to ensure that the service remains available.

This is particularly important when you have a process or application that is sending or receiving a high volume of messages because in these cases, there is a high likelihood that Azure Service Bus will throttle one or several requests. When this happens, a fault/HTTP error code is returned and it is important for your application to be able to detect this fault and attempt to remediate accordingly.

Unfortunately, throttle faults are not the only errors that can occur. As with any service, security, connection and other unforeseen errors (exceptions) can and will occur, so the challenge becomes not only being able to identify the type of fault, but in addition, know what steps should be attempted to remediate.

Per the guidance provided by the Azure Customer Advisory Team (http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/), the Neuron Azure Service Bus adapter uses an exponential back-off based on the values provided for the Retry Count, Minimum Back-Off and Maximum Back-Off properties within the Properties tab for both Publish and Subscribe mode.

Given a value of 3 retries, two seconds and ten seconds respectively, the adapter will automatically determine a value between two and ten and back off exponentially one time for each retry configured:

Exponential Back-Off Configuration – The adapter will automatically detect transient exceptions/faults and retry by implementing an exponential back-off algorithm given a retry count, initial and max back-off configuration.

Taking this example, if the adapter chose an initial back-off of two seconds, then in the event of a transient fault being detected (i.e. throttle, timeout, etc.) the adapter would wait two seconds before trying the operation again (i.e. sending or receiving a message) and exponentially increase that starting value until either the transient error disappears or the retry count is exceeded.

In the event that the retry count is exceeded, the Neuron Azure Service Bus adapter will automatically persist a copy of the message in the audit database to ensure that no messages are lost (provided a SQL Server database has been configured).
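
To make the behavior concrete, here is a minimal sketch of this detect-and-retry pattern. It is not the adapter’s actual implementation; the transient exception types are the common ones surfaced by the Service Bus client, and the message factory delegate is an assumption for illustration:

using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;

// retryCount, minBackoffSeconds and maxBackoffSeconds mirror the adapter's Retry Count,
// Minimum Back-Off and Maximum Back-Off settings.
static void SendWithRetry(QueueClient client, Func<BrokeredMessage> createMessage,
                          int retryCount, int minBackoffSeconds, int maxBackoffSeconds)
{
    var random = new Random();
    int delaySeconds = random.Next(minBackoffSeconds, maxBackoffSeconds + 1);

    for (int attempt = 0; ; attempt++)
    {
        try
        {
            // A fresh message per attempt, since a BrokeredMessage is not reusable across sends.
            client.Send(createMessage());
            return;
        }
        catch (ServerBusyException)                 // throttled by the service
        {
            if (attempt >= retryCount) throw;       // the adapter would audit the message here instead
        }
        catch (MessagingCommunicationException)     // transient connectivity fault
        {
            if (attempt >= retryCount) throw;
        }
        catch (TimeoutException)
        {
            if (attempt >= retryCount) throw;
        }

        Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
        delaySeconds *= 2;   // back off exponentially from the randomly chosen starting value
    }
}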

Understanding Smart Polling

When configuring the Neuron Azure Service Bus Adapter in Publish mode, the adapter can take advantage of a Neuron ESB feature known as Smart Polling.

With Smart Polling, the adapter will connect to an Azure Service Bus queue or subscription and check for messages. If one or more messages are available, they will be delivered immediately (see “Understanding Eventual Consistency” for more information on supported read behaviors).

However, if no messages are available, the adapter will open a connection to the Azure Service Bus entity and wait for a specified timeout before attempting to initiate another poll request (essentially resulting in a long-polling behavior). In this manner, Azure Service Bus quotas are honored while ensuring that the adapter issues a receive request only when the configured timeout occurs as opposed to repeatedly polling the Azure Service Bus entity.
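
Conceptually, the receive loop looks something like the sketch below. This is not Neuron’s code; the Complete call assumes the default peek-lock behavior described in the next section, and publishToNeuron is a stand-in for the actual publish step:

using System;
using Microsoft.ServiceBus.Messaging;

// waitDuration mirrors the adapter's Wait Duration setting.
static void Poll(SubscriptionClient client, TimeSpan waitDuration, Action<BrokeredMessage> publishToNeuron)
{
    while (true)
    {
        // Receive parks the request on the entity for up to waitDuration when it is empty,
        // so an empty subscription results in one long-running call rather than a tight polling loop.
        BrokeredMessage message = client.Receive(waitDuration);

        // Drain whatever is immediately available before waiting again.
        while (message != null)
        {
            publishToNeuron(message);
            message.Complete();
            message = client.Receive(TimeSpan.FromSeconds(1));
        }
    }
}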

Understanding Eventual Consistency

When working with Azure Service Bus, it is important to note that the model for achieving consistency is different than traditional distributed transaction models. For example, when working with modern relational databases or spanning multiple services that are composed into a logical unit of work (using WS-Atomic Transactions for example), it is a common expectation that work will either be performed completely or not at all. These types of transactions have the characteristics of being atomic, consistent, isolated and durable (ACID). However, to achieve this level of consistency, a resource manager is required to coordinate the work being carried out by each service/database that participates in a logical transaction.

Unfortunately, given the virtually unlimited scale of the web and cloud computing, it is impossible to deploy enough resource managers to account for the hundreds of thousands, if not millions, of resources required to achieve this level of consistency. Even if this were possible, achieving the scale and performance demanded by modern cloud-scale applications under that kind of coordination would be physically impossible.

Of course, consistency is still important for applications that participate in logical transactions spanning cloud services or that consume them. An alternative approach is to adopt a basically available, soft state, eventually consistent (BASE) model for transactions.

Ensuring Eventual Consistency in Publish Mode

Azure Service Bus supports this model for scenarios that require consistency, and the Neuron Azure Service Bus adapter makes taking advantage of this capability simply a matter of leaving the “Delete After Receive” property (available in the Publish Mode Settings) set to False, which is the default.

When set to False, the adapter ensures that a received message is not discarded from the Azure Service Bus entity until it has been successfully published to Neuron ESB. In the event that an error occurs when attempting to publish a message, the message will be restored on the Azure Service Bus entity, ensuring that it remains available for a subsequent receive attempt (please note that lock durations configured on the entity will affect the behavior of this feature; for more information, please refer to the Azure Service Bus documentation on MSDN: http://msdn.microsoft.com/en-us/library/ee732537.aspx).
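
In Service Bus terms, this behavior roughly corresponds to the peek-lock pattern sketched below (illustrative only; publishToNeuron stands in for the publish step):

using System;
using Microsoft.ServiceBus.Messaging;

// Delete After Receive = False: peek-lock, then settle the message based on the outcome.
static void ReceiveOnce(QueueClient queueClient, Action<BrokeredMessage> publishToNeuron)
{
    BrokeredMessage message = queueClient.Receive();
    if (message == null) return;

    try
    {
        publishToNeuron(message);   // hand the message off to the Neuron topic
        message.Complete();         // success: remove it from the queue
    }
    catch (Exception)
    {
        message.Abandon();          // failure: release the lock so the message can be received again
        throw;
    }
}

// Delete After Receive = True corresponds to a destructive read:
// QueueClient client = factory.CreateQueueClient("orders", ReceiveMode.ReceiveAndDelete);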

Choosing Synchronous versus Asynchronous Send

When the Neuron Azure Service Bus adapter is configured in Subscribe mode, you can choose to send messages to an Azure Service Bus queue or topic in either synchronous or asynchronous mode by setting the Adapter Send Mode property to either “Asynchronous” or “Synchronous” in the Subscribe Mode Property group.

If reliability is a top priority such that the possibility of message loss cannot be tolerated, it is recommended that you choose Synchronous. In this mode, the adapter will transmit messages to an Azure Service Bus queue or topic at a rate of about 4 or 5 messages per second. While it is possible to increase this throughput by adding additional adapters in Subscribe mode, as a general rule, use this mode when choosing reliability at the expense of performance/throughput.

To contrast, if performance, low latency or throughput is a top priority, configuring the adapter to send asynchronously will result in significantly higher throughput (by several orders of magnitude). However, in the event of a catastrophic failure (server crash, out-of-memory exception), messages that have left the Neuron ESB process but have not yet been transmitted to Azure Service Bus (i.e. are still in memory) will be lost, so the possibility of message loss is much higher than in synchronous mode because of the significantly higher density of messages in flight.
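
The difference boils down to the two send paths sketched below (illustrative only, using the asynchronous programming model available in the SDK 1.7 timeframe):

using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

static void SendToAzure(QueueClient client, BrokeredMessage message, bool synchronous)
{
    if (synchronous)
    {
        // Blocks until Service Bus acknowledges the message: maximum reliability,
        // roughly 4-5 messages per second per endpoint.
        client.Send(message);
    }
    else
    {
        // Returns immediately so many sends can be in flight at once: much higher throughput,
        // but anything still in memory is lost if the process crashes before transmission completes.
        Task.Factory.FromAsync<BrokeredMessage>(client.BeginSend, client.EndSend, message, null)
            .ContinueWith(t =>
            {
                if (t.IsFaulted)
                {
                    var error = t.Exception;   // observe and log/audit the failure here
                }
            });
    }
}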

Other Scenarios

Temporal Decoupling

One of the benefits of any queue-based messaging pattern is that the publisher/producer is decoupled from the subscribers/consumers. As a result, parties interested in a given message can be added and removed without any knowledge of the publisher/producer.

By persisting the message until an interested party receives it, the sending party is further decoupled from the receiving party because the receiving party need not be available at the time the message was written to the persistence store. Azure Service Bus supports temporal decoupling with both queues and topics because they are durable entities.

As a result, a party that writes new order messages to an Azure Service Bus queue can do so uninhibited, as shown below:

When you configure an instance of the Neuron Azure Service Bus adapter in Publish mode, you can disable the adapter by unchecking the “Enabled” box. Any new messages written to the Azure Service Bus queue or subscription will persist until the adapter is enabled once again.

Competing Consumers

Another messaging pattern that takes advantage of the pull-based pub-sub model from a performance and scalability perspective is Competing Consumers: size the number of consumers to the resources available to you, and keep adding consumers until your throughput requirements are met.

To take advantage of this pattern with the Neuron Azure Service Bus adapter and Azure Service Bus, simply add additional instances of the Publishing adapter as needed:

Competing Consumers – Adding additional consumers with Neuron Azure Service Bus is simply a matter of adding additional instances of the Publishing adapter.

Property Table

 

The following provides details for each property exposed through the Neuron ESB Explorer UI, including default values where applicable. All of the properties listed below are required.

General (used in all modes of the adapter)

  • Azure Service Bus Namespace Name – A registered namespace on Azure Service Bus. For example 'neudesic' would be the namespace for: sb://neudesic.servicebus.windows.net
  • Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS).
  • Azure ACS Key – The shared key used in conjunction with the Azure ACS Issuer Name.
  • Azure Entity Type – Default is Queue. Queue or Topic.
  • Azure Channel Type – Default is Default. Default, if outbound TCP port 9354 is open, or HTTP to force communication over HTTP port 80/443.
  • Retry Count – Default 5. The number of Service Bus operation retries to attempt in the event of a transient error (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Minimum Back-Off – Default 3. The minimum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (see the “Understanding Transient Error Detection and Retry” topic).
  • Maximum Back-Off – Default 3. The maximum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (see the “Understanding Transient Error Detection and Retry” topic).

Publish Properties (used only when the adapter is in Publish mode)

  • Azure Queue Name – The name of the queue that you want to receive messages from (appears when “Queue” is chosen as the Azure Entity Type in General Properties).
  • Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (appears when “Topic” is chosen as the Azure Entity Type in General Properties).
  • Azure Subscription Name – The name of the subscription you want to receive messages from (appears when “Topic” is chosen as the Azure Entity Type in General Properties).
  • Delete After Receive – Default False. If set to True, deletes the message from the queue or subscription after it is received regardless of whether it is published to Neuron successfully (see the “Understanding Eventual Consistency” topic).
  • Wait Duration – Default 5. Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (see the “Understanding Smart Polling” topic).
  • Neuron Publish Topic – The Neuron topic that messages will be published to. Required for Publish mode.
  • Error Reporting – Default Error. Determines how all errors are reported in the Windows Event Log and Neuron logs: as Errors, Warnings or Information.
  • Error on Polling – Default Stop Polling On Error. Determines whether polling of the data source continues on error and whether consecutive errors are reported.
  • Audit Message on Failure – Default False. Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.

Subscribe Properties (used only when the adapter is in Subscribe mode)

  • Adapter Send Mode – Default Asynchronous. Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (see the “Choosing Synchronous vs. Asynchronous” topic).
  • Adapter Queue Name – The name of the queue you want to send messages to (appears when “Queue” is chosen as the Azure Entity Type in General Properties).
  • Adapter Topic Name – The name of the topic you want to send messages to (appears when “Topic” is chosen as the Azure Entity Type in General Properties).

Message Format

Azure Service Bus uses a proprietary message envelope called a Brokered Message as the unit of communication between all messaging entities including queues, topics and subscriptions.

Publish Mode

In Publish mode, the Neuron Azure Service Bus Adapter will automatically map the body of the incoming Brokered Message to the Body property of the Neuron ESBMessage, serializing the payload based on the detected content type as follows:

 

BrokeredMessage.ContentType → ESBMessage.Header.BodyType

  • text/plain → text/plain
  • text/xml → text/xml
  • application/msbin-1 → application/msbin-1
  • binary/bytes → binary/bytes
  • Other → text/xml

Note per the table above that unless otherwise specified, the Neuron Azure Service Bus adapter will assume that the incoming message payload is text/xml.

In addition, any properties stored in the property bag of the BrokeredMessage will be automatically mapped to the ESBMessage property bag provided the “Include Metadata” option is checked on the General tab in the Adapter Endpoints configuration. An exception to this rule is that the adapter will always map the BrokeredMessage.LockToken to the ESBMessage property bag with the same name, regardless of whether “Include Metadata” is checked.
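
The Publish-mode mapping amounts to something like the sketch below. Only the BrokeredMessage side reflects the real Service Bus API; setNeuronBody and neuronProperties are hypothetical stand-ins for the Neuron-side assignment:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.ServiceBus.Messaging;

static void MapIncoming(BrokeredMessage brokered, Action<string, byte[]> setNeuronBody,
                        IDictionary<string, object> neuronProperties, bool includeMetadata)
{
    // Unrecognized content types fall back to text/xml, per the table above.
    string bodyType = brokered.ContentType;
    if (bodyType != "text/plain" && bodyType != "text/xml" &&
        bodyType != "application/msbin-1" && bodyType != "binary/bytes")
    {
        bodyType = "text/xml";
    }

    // Copy the raw payload (assumes the body was written as a stream).
    using (Stream body = brokered.GetBody<Stream>())
    using (var buffer = new MemoryStream())
    {
        body.CopyTo(buffer);
        setNeuronBody(bodyType, buffer.ToArray());
    }

    // The lock token is always carried across; other properties only when Include Metadata is checked.
    neuronProperties["LockToken"] = brokered.LockToken;
    if (includeMetadata)
    {
        foreach (KeyValuePair<string, object> property in brokered.Properties)
        {
            neuronProperties[property.Key] = property.Value;
        }
    }
}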

Subscribe Mode

In Subscribe mode, the Neuron Azure Service Bus Adapter will automatically create a new Brokered Message for each transmission and map the body of an outgoing ESBMessage to the new message body as follows:

 

ESBMessage.Header.BodyType → BrokeredMessage.ContentType

  • text/plain → text/plain
  • text/xml → text/xml
  • application/msbin-1 → application/msbin-1
  • binary/bytes → binary/bytes
  • Other → text/xml

In addition, any properties stored in the property bag of the ESBMessage will be automatically mapped to the BrokeredMessage property bag provided the “Include Metadata” option is checked on the General tab in the Adapter Endpoints configuration.
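
Going the other direction, the Subscribe-mode mapping might look roughly like this; esbBodyType, esbBody and esbProperties are hypothetical stand-ins for the Neuron message, and only the BrokeredMessage side reflects the real Service Bus API:

using System.Collections.Generic;
using System.IO;
using Microsoft.ServiceBus.Messaging;

static BrokeredMessage MapOutgoing(string esbBodyType, byte[] esbBody,
                                   IDictionary<string, object> esbProperties, bool includeMetadata)
{
    var brokered = new BrokeredMessage(new MemoryStream(esbBody), true)   // true: the message owns the stream
    {
        // Unrecognized body types fall back to text/xml, per the table above.
        ContentType = esbBodyType == "text/plain" || esbBodyType == "application/msbin-1" || esbBodyType == "binary/bytes"
            ? esbBodyType
            : "text/xml"
    };

    if (includeMetadata)
    {
        foreach (KeyValuePair<string, object> property in esbProperties)
        {
            brokered.Properties[property.Key] = property.Value;
        }
    }

    // Keep the 256KB brokered message limit (discussed below) in mind; larger payloads fail at send time.
    return brokered;
}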

Brokered Message Limitations

Note that the total payload size for Azure Service Bus messages is 256KB. The Neuron Azure Service Bus adapter will throw a runtime exception if a message greater than or equal to 256KB is sent and will save the message to the failed audit table.

Wrapping Up

Thanks for your interest and please don’t hesitate to hit me with questions, comments and feedback. If you see something missing, I’d love to hear from you as we are already starting to think about features for v.Next.

I had a ton of fun writing this adapter and would like to thank the Neuron product team for allowing me to make this small contribution to this incredible release.

This adapter is just a small part of this major release and I hope this post has piqued your interest in checking out Neuron ESB. Getting up and running is super simple and you can download the trial bits here: http://products.neudesic.com/

Global Windows Azure BootCamp – Phoenix 4/27


The rumors are true. The Global Windows Azure Bootcamp is coming to Phoenix on April 27th, 2013. Registration is now open: https://phxglobalazurebootcamp.eventday.com/

This is a truly global event in which the Phoenix community will come together to share and learn what you can do on Windows Azure.

This one day deep dive class will get you up to speed on developing for and deploying to Windows Azure. The class will be led by myself and fellow MVPs including the one and only Joe Guadagno, Dan Wahlin and your friendly neighborhood Microsoft Regional Director Scott Cate. You’re guaranteed to learn a ton, and in addition to the talks, you’ll work on some great hands-on labs so you can apply what you learn on the same day and take the labs with you! Best of all, if you get stuck, we’ll be there to make you do push-ups, I mean get unblocked ;)

AGENDA

We will start at 9:00 with welcome and introductions and get right into an end-to-end overview of Windows Azure. From there, we’ll participate in a massive, coordinated, global deployment to Windows Azure, teaming with over 60 other worldwide locations to see the workload in action (details are super secret, so you have to show up to get your security clearance; this is a boot camp after all)!

After we’ve done our best to take down a datacenter, we’ll take a break and take a lap around storage and database options on Windows Azure while we enjoy a catered lunch kindly sponsored by Microsoft. We’ll also have fresh primo coffee, sodas, waters and snacks to help you power through the labs, which will give you real-world exposure to what it’s like to actually apply what you’ve learned and take the working applications home with you.

From there we’ll take another break and wrap up the day with a look at how Windows Azure Service Bus changes how you think about messaging and integration when working at cloud scale. We’ll have a Service Bus lab and from there likely plan some nefarious after event activities at one of downtown Chandler’s fine watering holes.

Here are the details:

9:00 – 9:15    Welcome and Introductions (15 mins)
9:15 – 10:30   Windows Azure Overview (75 minutes)
10:30 – 11:15  Deploy to the cloud! (45 minutes)
11:15 – 11:30  Break (15 minutes)
11:30 – 1:00   Windows Azure Storage and Database (90 minutes)
1:00 – 2:00    Hands On Labs (2 hours)
2:00 – 2:15    Break (15 minutes)
2:15 – 3:45    Windows Azure Service Bus (90 minutes)
3:45 – 5:00    Wrap Up

HOW MUCH DOES BOOTCAMP COST?

This event is FREE to the attendees. Gratis! Gratuite! Libero! We’ll certainly take any good karma you want to send our way, but your attendance and full engagement is all we ask. Be sure to check out the prerequisites to ensure you are ready to rock.

DO I NEED TO BRING ANYTHING?

This is a BYOL event. To get the most of the event, you will want to come to boot camp with your own laptop pre-loaded with Visual Studio, the Azure SDK and all prerequisites. Please see http://globalwindowsazure.azurewebsites.net/?page_id=171 to download and install everything you’ll need to make this a great event.

BUT, I’M COMPLETELY NEW TO THIS AZURE CLOUD THING

This event is for you! We’ll have a mix of content both for experienced developers and those brand spanking new to Windows Azure. Our trainers will be here to answer all of your questions and help you with the labs, so remember, there are no stupid questions.

BUT, I ALREADY KNOW THIS STUFF

Awesome! We’d love to have you as you’ll probably teach us a thing or two and we guarantee you’ll walk away learning a few things too!

LOCATION, LOCATION, LOCATION

Boot camp will be held at Gangplank in Chandler, located at 260 South Arizona Avenue | CHANDLER, AZ 85225

WHAT’S NEXT?

Seating is limited for this event so please register now at https://phxglobalazurebootcamp.eventday.com/ to guarantee your seat and help us plan for coffee, drinks, snacks and lunch.

WABS BizTalk Adapter Service Installation in Seven Steps


With the proliferation of devices and clouds, businesses and developers are challenged more than ever to both enable employee productivity and take advantage of the cost benefits of cloud computing. The reality however, is that the vast majority of organizations are going to continue to invest in assets that reside both within their own data center and public clouds like Windows Azure and Amazon Web Services.

Windows Azure BizTalk Services (WABS) is a new PaaS based messaging and middleware solution that enables the ability to expose rich messaging endpoints across business assets, whether they reside on-premise or in the commercial cloud.

WABS requires an active Windows Azure account, and from there, you can provision your own namespace and start building rich messaging solutions using Visual Studio 2012. You can download everything you need to get started with WABS here: http://www.microsoft.com/en-us/download/details.aspx?id=39087

Once your WABS namespace has been provisioned, you are ready to start developing modern, rich messaging solutions. At this point, you can experiment with sending messages to a new messaging entity in Windows Azure called an EAI Bridge and routing them to various destinations including Azure Service Bus, Blob Storage, FTP, etc. However, if you want to enable support for connectivity to on-premise assets including popular database platforms like Microsoft SQL Server and Oracle Database as well as ERP systems such as Oracle E-Business Suite, SAP and Siebel eBusiness Applications, you will want to install an optional component called the BizTalk Adapter Service (BAS) which runs on-premise.

The BAS includes a management and runtime component for configuring and enabling integration with your LOB systems. The capabilities are partitioned into a design-time experience, a configuration experience and the runtime. At design time, you configure your LOB Target (i.e. SQL Server, Oracle DB, SAP, etc.) for connecting to your LOB application via a LOB Relay. Built on Windows Azure Service Bus Relay Messaging, the LOB Relay allows you to establish a secure, outbound connection to the WABS Bridge which safely enables bi-directional communication between WABS and your LOB target through the firewall.

More details on the BizTalk Adapter Service (BAS) architecture can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/hh689773.aspx

While the installation experience is fairly straightforward, there are a few gotchas that can make things a bit frustrating. In this post, I’ll walk you through the process for installing and configuring BAS in hopes of getting you up and running in a breeze.

Installing the BizTalk Adapter Service

Before you install BAS, ensure you’ve downloaded and installed the following pre-requisites:

  • WCF LOB Adapter Framework (found on the BizTalk Server 2013 installation media)
  • BizTalk Adapter Pack 2013 (found on the BizTalk Server 2013 installation media)
  • IIS 7+ and WAS (I’ve tested installation on Windows 7 and Windows 8 Enterprise editions)
  • AppFabric 1.1 for Windows Server
  • SQL Server 2008 or 2012 (all editions should be supported including free Express versions)

The installation process will prompt you for key information including the account to run the application pool that will host the management and runtime services and a password for encrypting key settings that will be stored by the management service in SQL Server. Let’s take a look at the process step-by-step.

1. When you unpack the installer, the most common mistake you’re likely to make is to double click it to get started. Instead, open a command prompt as an administrator and run the following command (you’ll need to navigate to the folder in which you unpacked the MSI):


msiexec /i BizTalkAdapterService.msi /l*vx install_log.txt

This command will ensure the MSI runs as Admin and will log results for you in case something goes wrong.

 

2. The first thing the installer will ask you for is the credentials for configuring the application pool identity for the BAS Management Service. This service is responsible for configuring LOB Relays and LOB Targets and stores all of the configuration in a repository hosted by SQL Server (Long Live Oslo!). In my case, I’ve created a local service account called svc-bas, but this of course could be a domain account or you can use the other options.

 

Install 1 creds

3. Before you continue, be sure that the account you are using to run the MSI is a member of the appropriate SQL Server role(s) unless you plan on using SQL Server Authentication in the next step. The wizard will create a repository called BAService, so it will need the necessary permissions to create the database.

4. Next, specify connection info for the SQL Server database that will host the BAService repository. SQL Express or any flavor of SQL Server 2008 or 2012 is supported.

 

Install 2 SQL 

5. Specify a key for encrypting sensitive repository information.

Install 3 master pass

 

6. The installer will then get to work creating the BAService in IIS/AppFabric and the BAService repository in SQL Server.

Install 5 creating databases

7. If all is well, you’ll see a successful completion message:

Install Complete

If the wizard fails, it will roll back the install without providing any indication as to why. If this happens, be sure to carefully follow steps 1 and 2 above and carefully review the logs to determine the problem.

After the installation is complete, you’ll notice the BAService has been created in IIS/AppFabric for Windows Server.


The BAService database consists of 4 tables which store information on the configured Azure Service Bus relay endpoints that communicate with the LOB Targets, the operations supported by each target (configured in Visual Studio) and finally the virtual URIs for addressing the BAService for configuring the entities previously mentioned:


At this point, the LobRelays, LobTargets and Operations tables will be empty.

Once you configure a LOB Target, the BAService will write the configuration data to each table, enabling Azure Service Bus Relay to fuse with the WCF LOB Adapters that ship with the BizTalk Adapter Pack. This combination enables very powerful rich messaging scenarios that support hybrid solutions exposing key business assets across traditional network, security and business boundaries in a simple and secure manner.

Announcing the 2nd Annual Global Windows Azure Bootcamp Phoenix!


Global Windows Azure Bootcamp

I am thrilled to announce the 2nd Annual GWAB which has been confirmed for Saturday, March 29th, 2014!

As of today, we have 80 locations in 76 cities spanning 41 countries!

For those of you who attended last year, you know what a blast we had writing and deploying code to Azure as part of hands on labs and our massive scale-out demo, the “Global Render Lab”. This exercise showed the power of distributed computing and we'll be doing something similar this year.

We are still in discussions with a few different projects and will soon be deciding on a charity project to back.  The work the charity needs will then be packaged into a deployable solution that all the attendees can deploy during the event. 

Our hope is that as our attendees are using this to learn about how to deploy to Windows Azure and how distributed computing works, we can help solve some of the world's problems at the same time.

Agenda and Speakers

We are still early in the planning process, but as with last year, the goal of this one day deep dive class is to get you up to speed on developing for Windows Azure. In fact, you will not only write code for Azure, you will also deploy several samples as well as participate in this year’s scale out project which will benefit a charity which will soon be announced.

The class includes presenters and trainers (mostly Microsoft MVPs) with deep, real world experience with Windows Azure, as well as a series of labs so you can practice what you just learned. In fact, I am pleased to announce that the following Microsoft MVPs and community rock stars have already signed on for this year’s event:

And we’re just getting started! If you’d like to volunteer, please contact me on Twitter @rickggaribay. We’re still looking for speakers, lab buddies, and help with promotion, sponsors, etc.

Awesome. How much does it cost?
This event is FREE to the attendees. Gratis! Gratuite! Libero!  However, seating is limited so be sure to register and secure your seat today: http://bit.ly/1gCdCZb 

What do I need to bring?
You will need to bring your own laptop and have it preloaded with the software that will soon be announced here (please check back or follow me on twitter for updates).

Please do the installation upfront as there will be no time to troubleshoot installations during the day.

You will also need to be signed up for a Windows Azure account. There are many options including a 100% absolutely free 30 day trial. Why not sign up now? http://www.windowsazure.com/en-us/pricing/free-trial/

Is this for beginners?
Yes and no. We will focus on a series of lectures and hands on labs aimed at level 200, but ad-hoc white boarding, deep scenario discussions and Q&A are all part of the fun. Think you already know it all? Great, we still need volunteers, speakers and lab buddies. Drop me a note on Twitter @rickggaribay

And now, for a little fun…

Big thanks to fellow MVPs Maarten Balliauw, Alan Smith, Michael Wood and Magnus Martensson for running this event as our global leaders. Thanks also to Scott Cate at Event Day for providing free registration hosting. We couldn’t do this without them!

Configuring Custom Domain Names on Windows Azure Websites in 4 Easy Steps


Windows Azure Websites (WAWS) provides a very robust, yet easy to use container for hosting your web applications. This doesn’t just pertain to ASP.NET apps, but includes several templates like Drupal, Wordpress, Orchard, etc. and also provides very nice first class support for Node.js web apps/APIs, PHP and Python.

If you are new to WAWS, you may think ‘big deal, this is just another web host’. You would be wrong. There is a TON of value that you get with WAWS that blows your conventional, commodity web hosters away:

  • The free version allows you to host up to 10 sites in a multi-tenant environment and provides a great dashboard, FTP and continuous deployment capabilities including first class support for git (local repos) and github.
  • The shared version adds support for seamlessly scaling your app up to 6 instances/nodes along with enabling Web Jobs which provide worker processes for executing jobs on a schedule, continuously or on-demand.
  • The standard version allows you to dedicate not instances, but full VMs to your application and supports auto-scaling your app based on metrics and triggers.

These are just the big rocks… whether you are a .NET, Node.js, PHP or Python developer, there’s a ton more goodness to WAWS, which you can learn more about here: http://www.windowsazure.com/en-us/documentation/services/web-sites/

When you create your WAWS application, you get both an IP and URL. The URL takes the form of [your app].azurewebsites.net. This is cool for development, testing and maybe corporate apps, but if you are building publicly visible web apps or APIs, chances are you’ll want your own domain name so that instead of [your app].azurewebsites.net you can point your users to foobaz.com or whatever.

Microsoft has official docs on how to do this here, but I found that there was a lot of detail that might intimidate folks so I thought I’d break it down in 4 simple steps. I’ll assume that you’ve already bought your shiny new domain name from a registrar and that it’s parked at some annoying, ad infested landing page.

Step 1: Ensure your site is configured for shared or standard mode

Free doesn’t support custom domains which seems pretty reasonable to me. If you started with a website in free mode, simply click on the Scale option and choose from “Shared” or “Standard” mode and click OK:


Step 2: Copy the IP and WAWS URL

The next step is to make note of your URL and IP address which you’ll need for the third step in this process. Go to the list of WAWS sites, select the site (but don’t click on it) and click on the “Manage Domains” icon at the bottom of the command bar:


This will bring up a dialog that includes your current domain record ([your app].azurewebsites.net) and your IP:


Step 3: Update the A Record and CNAMEs

Make a note of each and login to your domain registrar’s console. You want to look for “DNS Management” and either “Advanced” or “Manage Zones” or “Manage DNS Zone File”. You want to get to whatever console allows you to configure your A Record and CNAMEs. I won’t get into a bunch of DNS theory here, but in a nutshell, these records allow for requests to your registered domain name to be forwarded to Windows Azure, and specifically your website’s host name. The result is that your website will resolve to both [your app].azurewebsites.net and foobaz.com (or whatever domain you purchased).

Each registrar will obviously look different, but this is what GoDaddy’s looks like (there’s several other entries like ftp, MX records, etc. which can be ignored):


The A record needs to point to the IP address you captured in step 2. Replace whatever value is there with the IP address provided. When someone calls up foobaz.com, your registrar will authoritatively answer that request and then pass it on directly to the IP address you provided.

Now there are various docs, posts, etc. that will tell you that you can choose to use an A name record or a CNAME alias but my experience was that I needed to configure both. If you want to try one or the other, go ahead and do so and skip to Step 4. If it doesn’t work, come back and do both (I had to).

For the CNAME, there are 3 entries you need to make:

  • Point www to [your app].azurewebsites.net – this tells DNS that [your app].azurewebsites.net should be the destination (canonical host) for any DNS queries that begin with www (i.e. www.foobaz.com)
  • Point awverify AND awverify.www to awverify.[your app].azurewebsites.net – This provides a DNS validation mechanism so that WAWS can validate that your domain registrar has been configured to allow WAWS to serve as a canonical domain in the event that a CNAME look up fails. Example entries for both record types are shown after this list.
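
Putting it together, the resulting zone entries would look roughly like the following (using the hypothetical foobaz.com domain and the [your app] placeholder from above; your registrar’s UI may label and format these differently):

@               A       <IP address from Step 2>
www             CNAME   [your app].azurewebsites.net
awverify        CNAME   awverify.[your app].azurewebsites.net
awverify.www    CNAME   awverify.[your app].azurewebsites.net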

Be sure to save your file/settings.

Step 4: Enter your custom domain name in the Manage Domains dialog and check for validity

Pull up the “Domain Settings” for your website again, and this time, enter your new domain name (i.e. foobaz.com). If you want WAWS to respond to both www.foobaz.com and foobaz.com, you’ll want to create both entries. You’ll likely see a red dot indicating that validation and/or CNAME look up has failed:

 

Note that DNS can take up to 48 hours to propagate so as you move to this step, know that if it doesn’t immediately work, wait a few hours to a day and try again (Dynamic DNS providers solve this problem by acting as a proxy between your authoritative domain and canonical domains/IPs). It is very likely that you’ve done everything right, but the records have not yet propagated.

 


This is simply WAWS’ way of telling you that the records have not yet propagated. You can happily continue using your WAWS website using the [your app].azurewebsites.net URL. In time, when you come back to the dialog, the verification should succeed and any request for foobaz.com should automatically resolve to your WAWS app.

If you’ve followed these steps and still have issues after 24-48 hours, feel free to drop a comment or hit me on twitter @rickggaribay and I’ll be happy to help you out.


Speaking on Building APIs with NodeJS on Microsoft Azure Websites Next Tuesday, 4/15


I will be speaking at the Tucson .NET User Group next Tuesday on Building APIs with Node.js on Microsoft Azure Websites. This will be the 3rd time I’ve spoken at this group, but the first time I’m following Scott Hanselman (who spoke last month), definitely a tough act to follow!

You can learn more about the topic here: http://bit.ly/1hEzAJf

Visual Studio Live Chicago Recap: From the Internet of Things to Intelligent Systems - A Developer's Primer


I had the pleasure of presenting at Visual Studio Live! Chicago this week. Here is a recap of my second talk “From the Internet of Things to Intelligent Systems- A Developer’s Primer (if you’re looking for a recap of my “Building APIs with NodeJS on Microsoft Azure Websites” you can find it here).

While analysts and industry pundits can’t seem to agree on just how big IoT will be in the next 5 years, one thing they all agree on is that it will be big. Estimates range from a bearish 50B internet-connected devices by 2020 to a more moderate 75B and a bullish 200B. But the reality is that IoT isn’t something that’s coming. It’s already here, and this change is happening faster than anyone could have imagined. Microsoft predicts that by 2017, the entire space will represent over $1.7T in market opportunity, spanning from manufacturing and energy to retail, healthcare and transportation.

While it is still very early, it is clear to see that the monetization opportunities at this level of scale are tremendous. As I discussed in my talk, the real opportunity for organizations across all industries is two-fold. First, the data and analytical insights that the telemetry (voluntary data shared by the devices) will provide will change the way companies plan, execute and the rate at which they will adapt and adjust to changing conditions in their physical environments. This brings new meaning to decision support and no industry will be left untouched in this regard. These insights will lead to intelligent systems that are capable of taking action at a distance, based either on pre-configured rules that interpret this real-time device telemetry or other command and control logic that prompts communication with the device.

As a somewhat trivial but useful example, imagine your coffee maker sending you an SMS asking for your permission to approve a descaling job. Another popular example of a product that’s already had significant commercial success is the Nest thermostat. Using microcontrollers very similar to the ones I demonstrated, these are simple examples that are already possible today.

Beyond the commercial space, another very real example is a project my team led for our client that involved streaming meter and sensor telemetry from a large downtown metroplex enabling real-time, dynamic pricing, up-to-the-minute views into parking availability and significant cost and efficiency savings by adopting a directed enforcement approach to ticketing.

So, IoT is already everywhere and in many cases, as developers we’re already behind. For example, what patterns do you use for managing command and control operations? How do you approach addressability? How do you overcome resource constraints on devices ranging in size from drink coasters to postage stamps? How do you scale to hundreds and thousands of devices that are sharing telemetry data every few seconds? What about security?

While 75 minutes is not a ton of time to tackle all of these questions, I walked the audience through the following four scenarios based on the definition of the Command message pattern in the "Service Assisted Communications" paper that Clemens Vasters (@clemensv) at Microsoft published this February:

1. Default Communication Model with Arduino - demonstrates the default communication model whereby the Arduino provides its own API (via a Web Server adapted by zoomcat). Commands are sent from the command source to the device in a point to point manner.

2. Brokered Device Communication with Netduino Plus 2 - demonstrates an evolution from the point to point default communication model to a brokered approach to issuing device commands using MQTT. This demo uses the excellent M2MQTT library by WEB MVP Paolo Patierno (@ppatierno) as well as the MQTT plug-in for RabbitMQ (both on-premise and RabbitMQ hosted); a minimal M2MQTT sketch follows this list.

3. Service-Assisted Device-Direct Commands over Azure Service Bus - applies the fundamental service assisted communications concepts evolving the brokered example to leverage Azure Service Bus using the Device Direct pattern (as opposed to Custom Gateway). As with the brokered model, the device communicates with a single endpoint in an outbound manner, but does not require a dedicated socket connection as with MQTT implicitly addressing occasionally disconnected scenarios, message durability, etc.
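
To give a flavor of the brokered approach from the second demo, here is a minimal M2MQTT sketch; the broker host, client ID, topic names and payload are hypothetical and this is not the demo code itself:

using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

static void RunDevice()
{
    // Connect to the broker (RabbitMQ with the MQTT plug-in in the demo's case).
    var client = new MqttClient("broker.example.com");
    client.Connect("device-001");

    // The device listens for commands on its own topic...
    client.MqttMsgPublishReceived += (sender, e) =>
    {
        string command = Encoding.UTF8.GetString(e.Message);
        // ...interpret and act on the command here.
    };
    client.Subscribe(new[] { "devices/device-001/commands" },
                     new[] { MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE });

    // ...and publishes telemetry to a topic the back end subscribes to.
    client.Publish("devices/device-001/telemetry", Encoding.UTF8.GetBytes("{\"temp\":72}"));
}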

In the final, capstone demo, “Service-Assisted Device-Direct Commands on the Azure Device Gateway”, I demonstrated the culmination of work dating back to June 2012 (in which Vasters first shared the concept of Service-Assisted Communications) which is now available as a reference architecture and fully functional code base for customers ready to adopt an IoT strategy today:


As a set up for the demo, I discussed the Master and Partition roles. The Master role manages the deployment of partitions and the provisioning of devices into partitions using the command line tools that ship with the code base.

In the demo, I provided a look at the instance of Reykjavik deployed on our Neudesic Azure account including the Master and Partition roles. I showed the Azure Service Bus entities for managing the ingress and egress of device messaging for command, notification, telemetry and inquiry traffic (The Device Gateway is currently capable of supporting 1024 partitions with each partition supporting 20K devices today) as well as the storage accounts responsible for device registration and storing partition configuration settings.

I also discussed the protocols for connecting the device to the gateway (AMQP and HTTP are in the box and an MQTT adapter is coming very soon) and walked through the Telemetry Pump which dispatches telemetry messages to the registered telemetry adapter (Table Storage, HD Insight adapters, etc.)

The demo wrapped up with a Reykjavik device sample consisting of a Space Heater emulator that I registered on the Neudesic instance of the Device Gateway to acquire its ingress and egress endpoints; the emulator initializes fan speed and RPM and begins to send telemetry messages to its outbox every 30 seconds (fully configurable).

The beauty of the demo is in its simplicity. Commands are received via the device’s inbox and telemetry is shared via its outbox. The code is simple C# with no heavy frameworks, which is really key to running on devices with highly constrained resources:

void SendTelemetry()
{
    this.lastTelemetrySent = DateTime.UtcNow;

    var tlm = new BrokeredMessage
    {
        Label = "tlm",
        Properties =
        {
            {"From", gatewayId},
            {"Time", DateTime.UtcNow},
            {"tiv", (int) this.telemetryInterval.TotalSeconds},
            {"fsf", this.fanspeedSettingRpmFactor},
            {"fss", this.fanSpeedSetting},
            {"fon", this.fanOn},
            {"tsc", this.temperatureSettingC},
            {"hon", this.heaterOn},
            {"ofr", this.lastObservedFanRpm},
            {"otm", this.lastObservedTemperature}
        }
    };

    tlm.SessionId = Guid.NewGuid().ToString();

    this.sender.SendWithRetryAsync(tlm);
}

 

A screenshot from the telemetry table populated by the Reykjavik Table Storage adapter is shown in the Neudesic Azure Storage Explorer below:


As I discussed, this is an early point in a journey that will continue to evolve over time, but the great thing about this model is that everything I showed is built on Microsoft Azure, so there’s nothing to stop you as a developer from building your own Custom Protocol Adapter, and this is really the key to the thinking and philosophy around the Device Gateway.

It is still very early in this wave and every organization is going to have different devices, protocols and requirements. So while you’ll see investments in the most common protocols (AMQP, MQTT and CoAP), the goal is to make this super pluggable and fully embrace custom protocol gateways that just plug in.

As with the Protocol Adapters, there’s nothing to stop you from building your own Telemetry adapter or to use Azure Service Bus or BizTalk Services to move data on premise, etc.

Still with me? Great. The links to my demo samples and more details on the talk are available here:

Abstract: http://bit.ly/vsl-iot 

Demo Samples: https://github.com/rickggaribay/IoT 

Oh, and if you missed the Chicago show, don’t worry! I’ll be repeating this talk in Redmond and Washington DC, so be sure to register now for early bird discounts: http://bit.ly/vslive14

Visual Studio Live Chicago Recap: Building APIs with NodeJS on Microsoft Azure Websites


My first talk at VS Live Chicago this week (if you’re looking for my IoT talk, please click here) was based on a talk I started doing last year demonstrating fundamental unit testing techniques with NodeJS and Mocha. Since then, the code and the talk have evolved into a real API currently in early alpha at Neudesic.

In this session, we started by looking at the problem (and opportunity) with long, ugly URLs and how most URL minification APIs like bit.ly, tinyurl, etc. solve the problem today.

From there, we looked at why NodeJS is a great choice for building a Web API and proceeded to build the 3 key APIs required to fulfill the most fundamental features you’d expect from a URL shortening API including:

  • Shorten
    • When I submit a long, ugly URL to the create API, I should get back a neurl.
  • Redirect
    • When I submit a neurl to the submit API, my request should be automatically redirected.
  • Hits
    • When I submit a neurl to the hits API, I should get back the number of hits/redirects for that neurl.

With the API up and running on my laptop, we proceeded to create an Azure Website and push the Node app via my local Git repository, taking it live. All was not well, unfortunately, as initial testing of the Shorten API returned 500 errors. A quick look at the log dumps using the venerable Kudu console revealed the cause: the environment variable for the MongoDB connection string didn’t exist on the Azure Website deployment, which was quickly remedied by adding the variable to the website from the Azure portal. Yes, this error was fully contrived, but Kudu is so cool.

With the API up and running, we exercised it a bit, verifying that the Redirect and Hits APIs were good to go, and then scaled out the API from one to six instances with just a few clicks.

As the API continues to mature, I’ll update the talk to demonstrate how this level of indirection brought forth by virtualizing the actual URL (as with traditional services and APIs) introduces many opportunities to interact with the person consuming the API (all via URIs!) as they take the journey that starts with the click and ends with the final destination.

Without further ado, the code and more details on the talk can be found below.

Code: https://github.com/rickggaribay/neurl

Abstract: http://bit.ly/1iEEbNV 

Speaking of which, if you haven’t already, why not register for Visual Studio Live Redmond or Washington DC? Early bird discounts are currently available so join me to see where we can take this API from here! http://bit.ly/vslive14

Building a Simple NodeJS API on Microsoft Azure Websites from Start to Finish


NodeJS is a powerful framework for building IO-centric applications with JavaScript. Although it hasn’t yet reached a major version number (as of this writing, the latest build is 0.10.28), the level of developer, community, and industry support for NodeJS is nothing short of astounding. From Wal-Mart to LinkedIn, NodeJS is powering more and more of the experiences with which you interact every day.

Although there are many options for hosting NodeJS applications, Microsoft has been an early supporter of NodeJS from the beginning by making direct investments in the framework and demonstrating a commitment to making NodeJS a first class citizen on Windows, both on-premises and on Microsoft Azure.

In my new article for CODE Magazine, I provide a lap around NodeJS and Microsoft Azure Websites by showing you a simple but functional API that I recently developed from the ground up. I’ll start by discussing the design of the API, go on to reviewing implementation details, and then proceed to pushing the API live on Microsoft Azure Websites.

You can read the article here as well as on Amazon and at your local news stand.

http://bit.ly/1nT4K6h

Visual Studio Live Redmond – 8/18 to 8/21


The Goods...


Thank you Redmond, 1105 Media, Microsoft, fellow speakers and all attendees for a great show. I had a blast!

Code: https://github.com/rickggaribay/IoT
 
Code: https://github.com/rickggaribay/neurl
+++

 

I’m thrilled to be speaking at VS Live Redmond next week. The show starts on Monday August 18th and goes through Thursday the 21st on Microsoft campus in Redmond, WA.

Events in Redmond are always a special treat as it gives everyone a chance to see the campus, interact with product team members and, as always, meet and hang out with some of the best, most recognized speakers in the industry like Ted Neward, Michael Collier, Brian Noyes, Eric Boyd, Rachel Appel, Miguel Castro, Rocky Lhotka, Andrew Brust – the list goes on.

I’ll be delivering two Azure focused presentations on the Internet of Things and API development with NodeJS.

Since there is only so much space available for the abstracts themselves, I thought I’d elaborate a bit on what you can expect from each session in this short post. You can find more details about both talks on the VS Live Redmond website or go directly to the abstracts by following the links below.

From the Internet of Things to Intelligent Systems: A Developer's Primer

In this talk, I lay the foundation for IoT and why you, as a developer should care. I’ll show off a handful of devices ranging from Arduino, Netduino and Fez Spider and demonstrate a number of common patterns currently in the wild including default communication, brokered and service assisted. We’ll explore the challenges that exist today in supporting commands, notifications, inquiries and telemetry. I’ll then spend some time giving you an in-depth tour of Reykjavik, Microsoft’s code name for its reference architecture focused on delivering highly scalable messaging fabrics for modern IoT solutions.

We’ll take a look at the reference architecture and how it maps to components on Microsoft Azure. I’ll then demonstrate what a first-class Reykjavik device looks like and demonstrate live telemetry and commands for an end-to-end tour of Reykjavik. I’ve been spending a lot of time with Clemens and team over the last several weeks so this promises to be an inside look at the reference architecture and general shape of things you're unlikely to find publicly anywhere else.

Learn more about this talk here: http://bit.ly/VSLRIOT or follow the conversation on Twitter #VSLTH04

Building APIs with NodeJS on Microsoft Azure Websites

This is a talk that I’ve been working on for several months now and continues to evolve. As I discuss in my latest article in CODE Magazine, it started off as a spike for teaching myself basic NodeJS and kind of evolved into a little project for work that needed a hosting environment. After exploring various options, Azure Websites made the most sense and this talk focuses on the key features and functionality of a little URL shortening API along with key ALM considerations like IDE, unit testing, continuous integration and deployment.

I’ll walk you through each step I took in building this API from scratch and deploy it live to Azure Websites as well as show you some really cool things you can do with the Kudu console when things go awry (as they almost always do in a live demo :-))

More about this talk here: http://bit.ly/VSLRAPI or follow the conversation on Twitter  #VSLW09

If you plan on attending either of my sessions please stop by and say hi before or after the talk. I hope to see you there!
