This is for anyone thinking of moving their Access data to a SQL Azure platform and using an Access front-end file to link to it...and maybe for those who have already gone through the process themselves and want to add their story (horror or otherwise) or suggestions. The net result for me: it's slow and prone to errors that are not always easy to resolve.
A client of mine wanted to get their Access data "in the cloud" while retaining their Access front-end, which has many forms/reports. They had been using a single front-end (forms/reports) Access 2010 file linked to several other Access 2010 "back-end" files (roughly 70 tables) residing on several networked file shares. Their user base is about 10-15 users, split between two states in the U.S. They were all accessing the SINGLE Access front-end from their 64-bit (and 32-bit) Windows 7/XP systems, with some of the users having to remote desktop into it in order to use it. Needless to say, it was very slow.
The desired target environment was a client-server setup, with each user running their own copy of a distributed Access 2010 front-end linked to the SQL Azure data.
I first migrated all the Access 2010 data files to a single, on-premises SQL Server 2008 R2 database using the SQL Server Migration Assistant for Access. After resolving a lot of table structure issues (missing primary keys, data type conversion issues, etc.), I pushed the schema to a SQL Azure database using the Red Gate SQL Compare and SQL Data Compare tools. The Access 2010 front-end was modified with VBA which, upon startup, refreshes the table links in the front-end to point to the Azure tables via a "DSN-less" ODBC connection (meaning no ODBC file or DSN is required on the user's system or on a central file share). I also created login, password-reset and password-update forms in the Access front-end which use DAO/ADO VBA routines to authenticate users and to ensure the ODBC linked tables are updated with the user's credentials.
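The startup relink boils down to a DAO loop over the TableDefs collection. Here is a minimal sketch of the idea; the server, database and credential values below are placeholders, not the client's actual values:

    ' Re-point every ODBC linked table at Azure using a DSN-less connect string.
    ' Server, database and credentials below are placeholders.
    Public Sub RelinkTablesToAzure(ByVal strUser As String, ByVal strPwd As String)
        Dim db As DAO.Database
        Dim tdf As DAO.TableDef
        Dim strConnect As String

        strConnect = "ODBC;DRIVER={SQL Server Native Client 10.0};" & _
                     "SERVER=tcp:myserver.database.windows.net,1433;" & _
                     "DATABASE=MyAzureDb;" & _
                     "UID=" & strUser & "@myserver;" & _
                     "PWD=" & strPwd & ";" & _
                     "Encrypt=yes;"

        Set db = CurrentDb
        For Each tdf In db.TableDefs
            ' Only touch existing ODBC links; leave local and system tables alone
            If Left$(tdf.Connect, 5) = "ODBC;" Then
                tdf.Connect = strConnect
                tdf.RefreshLink    ' this call is what takes minutes against Azure
            End If
        Next tdf
        Set db = Nothing
    End Sub

The login and password forms simply collect the credentials and hand them to this routine at startup.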
To ensure the users are working with the latest Access front-end, I created a VBScript "updater" that the user clicks on their desktop. It checks for a newer front-end version on a file share (based on the modified date, which is actually a bit problematic) and, if one exists, downloads it to their Windows user profile folder. Initially, I had the script download either a 32-bit or 64-bit compiled Access 2010 file (a .accde file) to the user's system, since I wanted to use the fastest-performing front-end possible. However, the compiled version would not work on all of the client systems. Because the users are on different Windows versions (7 or XP, 32-bit or 64-bit) and different Access installs (again 32-bit or 64-bit, and yes, even different Service Packs!), the compiled front-end would fail on startup on some systems. So I reluctantly decided to lock down the uncompiled (source) version of the file as best I could and use that for everyone.
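In outline, the updater amounts to something like the VBScript below (the share path and file names are made up). One gotcha: VBScript does not short-circuit Or, so the existence check has to be kept separate from the date comparison:

    ' Copy a newer front-end from the share to the user's profile folder,
    ' comparing file modified dates (fragile, as noted above).
    ' The share path and file names are placeholders.
    Option Explicit
    Dim fso, wsh, src, dst, needCopy
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set wsh = CreateObject("WScript.Shell")

    src = "\\fileserver\apps\FrontEnd.accdb"
    dst = fso.BuildPath(wsh.ExpandEnvironmentStrings("%USERPROFILE%"), "FrontEnd.accdb")

    needCopy = Not fso.FileExists(dst)
    If Not needCopy Then
        needCopy = fso.GetFile(src).DateLastModified > fso.GetFile(dst).DateLastModified
    End If
    If needCopy Then fso.CopyFile src, dst, True   ' overwrite the local copy

    ' Open the local copy with whatever Access version is installed
    wsh.Run """" & dst & """"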
All of this worked (well, sort of), but there were many bumps along the way. For starters, I found that each PC needed the absolute latest SQL Server 2008 Native Client driver installed, either 32-bit or 64-bit depending on the OS. This mattered because SQL Azure support was added (or at least improved) in the most recent versions of the driver.
Secondly, several forms in the front-end had a tab control filled with subforms, which meant the main form and every subform loaded as soon as the user opened the main form. This was painfully slow. To lessen the load time, I made the subforms initially unbound and only bound them to their record source when the parent tab was clicked.
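In practice that can be as simple as leaving the subform's record source empty at design time and setting it in the tab control's Change event. A rough sketch, assuming a tab control named tabMain and a subform control named subOrders (the names and the record source are made up):

    ' Bind a subform only when its tab page is first selected.
    ' tabMain, subOrders and the record source below are placeholder names.
    Private Sub tabMain_Change()
        If Me.tabMain.Value = 1 Then        ' index of the page holding the Orders subform
            With Me.subOrders.Form
                If .RecordSource = "" Then
                    .RecordSource = "SELECT * FROM dbo_Orders WHERE CustomerID = " & Me.CustomerID
                End If
            End With
        End If
    End Sub

Filtering on the parent form's current key, as in the placeholder SQL above, has the side benefit of only pulling the rows that page actually needs across the wire.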
Another issue was complex queries (sometimes even nested queries) which, when joining multiple Azure tables across the wire, caused huge processing delays. I took the most complex ones and converted them to T-SQL functions and/or stored procedures to offload the query processing to the server. This helped minimize the delays for specific forms, or in some cases allowed the form results to at least appear.
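One way to call these from Access is through a temporary pass-through QueryDef, so nothing is evaluated locally by the Jet/ACE engine. A rough sketch (the procedure name and the gstrAzureConnect global are made-up names; the connect string would be the same DSN-less one used for the links):

    ' Run a server-side stored procedure via a pass-through query so the
    ' joins execute on Azure rather than in Access. Names are placeholders.
    Public Function OpenOrderSummary(ByVal lngCustomerID As Long) As DAO.Recordset
        Dim qdf As DAO.QueryDef

        Set qdf = CurrentDb.CreateQueryDef("")       ' temporary, unnamed QueryDef
        qdf.Connect = gstrAzureConnect               ' same DSN-less string used for the links
        qdf.SQL = "EXEC dbo.usp_OrderSummary " & lngCustomerID
        qdf.ReturnsRecords = True

        Set OpenOrderSummary = qdf.OpenRecordset(dbOpenSnapshot)
    End Function

A saved pass-through query can also be used directly as a form or report record source, with the caveat that the results are read-only.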
There are also ODBC call errors (3146 comes to mind) that pop up seemingly at random, whether you are actively using the application or not, and which require the user to completely restart the Access front-end before the linked tables can reconnect. I'm trying to develop some type of global error handler to deal with this one, but I'm not sure it will work, since ODBC errors sit somewhat outside the bounds of Access error handling as far as I know.
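What I have in mind is a single routine that every procedure's error handler can call, roughly like the sketch below (gstrUser/gstrPwd and RelinkTablesToAzure refer to the startup relink sketch earlier). Whether a relink is actually enough to recover without a full restart is exactly the part I'm unsure about:

    ' Rough idea only: trap ODBC call failures, try one relink, and fall
    ' back to telling the user to restart. The globals here are placeholders.
    Public Sub HandleOdbcError(ByVal lngErr As Long)
        Const ODBC_CALL_FAILED As Long = 3146

        If lngErr = ODBC_CALL_FAILED Then
            On Error Resume Next
            Err.Clear
            RelinkTablesToAzure gstrUser, gstrPwd   ' routine from the startup sketch
            If Err.Number <> 0 Then
                MsgBox "The connection to the server was lost. " & _
                       "Please close and reopen the application.", vbExclamation
            End If
            On Error GoTo 0
        End If
    End Sub

Each procedure's error handler would then hand the error number off to this routine rather than dealing with 3146 itself.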
The bottom line is that unless you can convert all your existing Access queries to pass-through SQL for processing on the server, Access-to-Azure is slow...at least that has been my experience. For instance, using VBA to loop through all the linked tables and refresh them with the DAO "RefreshLink" method takes at least two or three minutes against Azure, while it takes only a couple of seconds against my locally networked 2008 R2 instance. And I'm not sure converting all the queries to VBA-generated SQL would make the inherent Azure latency issues any more tolerable.
Good luck if you also find yourself working toward this goal.