Most of the content will be migrated to the new blog; however this original blog location will remain in a read-only state. This will be the last post on the original blog. All new posts will be made exclusively to the new blog on the SAP Community Network.
As Eric noted in his article, this post was motivated by a discussion that he and I had about multi-tenancy around Sybase’s forthcoming SQL Anywhere OnDemand cloud database. In this post I will respond to Frank’s comments, attempting to frame the discussion in the context that the article was intended. I think we will find it is all a matter of hats. To that end, I invite you to put on my hat for a moment.
I am a product manager on the SQL Anywhere team. For those who are new to SQL Anywhere, it is an embedded relational database that has been in development at Sybase for over 20 years. Although you may never have heard of it, there is a good chance you have used it. This is because it is typically so deeply embedded in an ISV's product that you do not know it is there. An example of an application that embeds SQL Anywhere is Intuit's QuickBooks.
Over the last two years, we have started to see a shift in the ISV community. ISVs who previously deployed an application to their end users (and had it run on-premise) are feeling pressure to offer a hosted version. This turns an ISV's core competency on its head. After all, ISVs have expertise in deploying software and managing deployed software. While hosting removes those challenges, it puts the challenges of hosting in their place. This is an entirely new world for many ISVs.
To capture this shift (and other shifts within IT), many companies have launched full-stack, public platform-as-a-service (PaaS) products. These are multi-tenant platforms where resources are shared between hundreds of different applications running on the platform. The promise of a PaaS is that it will take care of the hosting and scale your application automatically.
There is no such thing as a free lunch. The tradeoffs come in four places: flexibility, security, power (or functionality), and cost. Now that we are all wearing my hat (also known as the ISV-looking-for-a-platform-to-host-their-application hat), let’s take a look at the objections raised in Frank’s response.
When choosing a platform, any platform, you have entered a garden. Some of these gardens have quaint white picket fences with lots of gates and a nice breeze. Others have tall, thick, brick walls.
On a PaaS, the ISV will be limited to the physical locations, certifications (HIPAA, PCI-DSS), hardware, technologies, and terms-of-service of the platform provider. This rigidity causes problems if the ISV has current (or future) requirements that cannot be met by the platform provider.
So what does multi-tenancy have to do with this inflexibility? Whether speaking of a platform or an application, it is often the multi-tenancy aspect that limits the amount of flexibility that an application or platform can support. I am not saying that multi-tenancy is bad, just that it is usually inversely correlated with flexibility.
It’s Less Secure
The question of multi-tenancy for an ISV has two facets. There is the question of the multi-tenancy of the platform they are running on, and there is the question of their ability to create a multi-tenant application.
I am not aware of any published security breaches between separate applications running on a multi-tenant platform. Each application running on the platform will likely have its own database, and so the risk of data leakage is mitigated. I believe that the platform providers will have done a lot of work and testing to ensure that separate applications on their platforms are isolated.
But the platforms do not provide any help in isolating the data within an ISV’s own application. When an ISV deploys an application on-premise, each customer gets their own instance of the application (with their own instance of the database). When the ISV pulls all of those customers together to host the application, does it make sense to combine all of the customers’ databases together into a single database?
Most PaaS databases have been designed to scale with the absolute size of a single database. This metric of scaling suggests that it would be best to combine all of the customers' data into a single database. This creates a potential risk: the ISV could introduce a bug that accidentally exposes one customer's data to another. The most likely cause is a coding error (e.g., forgetting to filter the data to just that customer, or a bug that causes confusion of the customer identifier).
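To make the failure mode concrete, here is a minimal sketch in Python using an in-memory SQLite database. The schema and tenant names are invented for illustration; a real ISV's application would be far larger, which is exactly why such a bug is easy to miss.

```python
import sqlite3

# All customers' rows live in one shared table, distinguished only by
# a tenant_id column (the classic shared-database multi-tenant design).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

def invoices_buggy():
    # BUG: the tenant filter was forgotten -- every customer sees every row.
    return conn.execute("SELECT tenant_id, amount FROM invoices").fetchall()

def invoices_safe(tenant_id):
    # Correct: every query is scoped to the requesting tenant.
    return conn.execute(
        "SELECT tenant_id, amount FROM invoices WHERE tenant_id = ?",
        (tenant_id,)).fetchall()

print(len(invoices_buggy()))       # 3 rows: globex's data leaks to acme
print(len(invoices_safe("acme")))  # 2 rows: only acme's invoices
```

The one missing WHERE clause is the entire difference between a correct page and a data breach, and nothing in the schema itself prevents it.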
Cases like this have happened in the wild (emphasis mine):
Microsoft BPOS cloud service hit with data breach
“We recently became aware that, due to a configuration issue, Offline Address Book information for Business Productivity Online Suite (BPOS) Standard customers could be inadvertently downloaded by other customers of the service, in a very specific circumstance,” said Clint Patterson, director of BPOS Communications at Microsoft.
These are large companies that I expect have the resources to design and test their multi-tenant solutions, and yet they still had (thankfully, limited) data breaches.
The concern of many smaller ISVs who are brand new to hosting is that they will not do it correctly. I suspect data breaches of this nature are even more common than reported; many smaller ISVs simply do not have the profile for their breach to be featured in ComputerWorld or VentureBeat.
One solution for the ISV is to keep total isolation of the data between all of their tenants. One tenant, one database. The application layer may be multi-tenant, but the database is single-tenant. While some of the platforms will allow you to maintain multiple databases, it is not cost-effective, and they do not have any tools to help manage thousands of separate databases.
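Here is a sketch of what that architecture looks like at the code level, again using SQLite in place of a real database engine. The router class and schema are hypothetical illustrations of the one-tenant-one-database pattern, not SQL Anywhere OnDemand's actual mechanism.

```python
import sqlite3

# Multi-tenant application layer over single-tenant databases:
# each tenant gets its own private database (here, an in-memory handle;
# in practice, a separate database file or server).
class TenantRouter:
    def __init__(self):
        self._dbs = {}  # tenant id -> that tenant's own connection

    def db_for(self, tenant_id):
        if tenant_id not in self._dbs:
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE notes (body TEXT)")
            self._dbs[tenant_id] = conn
        return self._dbs[tenant_id]

router = TenantRouter()
router.db_for("acme").execute("INSERT INTO notes VALUES ('acme secret')")
router.db_for("globex").execute("INSERT INTO notes VALUES ('globex secret')")

# Even a query with no WHERE clause cannot cross tenants, because each
# tenant's connection only ever sees that tenant's database.
rows = router.db_for("acme").execute("SELECT body FROM notes").fetchall()
print(rows)  # [('acme secret',)] -- globex's row is physically elsewhere
```

Note that the forgotten-filter bug from the shared-database design is structurally impossible here; the trade-off is that someone now has to manage thousands of databases.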
This is exactly the use-case for which SQL Anywhere OnDemand was designed: a multi-tenant application layer, backed by a single-tenant database layer.
I want to make it clear I am not suggesting a multi-tenant application is inherently insecure. (After all, we are enabling our ISVs to create multi-tenant applications!) Instead I am suggesting any developer should only include multi-tenancy up to the level they are confident that they can make secure. For many ISVs who do not have experience in multi-tenancy and hosting (and whose apps are already written as single-tenant applications), it may be prudent to keep the databases single-tenant.
It’s Less Powerful
As Frank points out, the platforms allow for huge improvement in productivity. I have no argument here. However here as well, there is no free lunch. The productivity gain is inversely correlated with power and functionality.
When I have to write a quick script, my language of choice is Python. I love Python. Our SQL Anywhere database is written in C/C++, with some performance critical routines written in assembly. Would it be a wise choice to rewrite SQL Anywhere in Python? No, it is not the right tool for the job.
Many of the ISVs using SQL Anywhere have very database-intensive applications. They move large amounts of their code into stored procedures in order to reach their performance goals. Moving the logic out of the database and going through an abstraction layer (e.g., object-relational mappers) may not be an option for them.
This really comes down to the same conclusion as the flexibility argument. If the restrictions in functionality (which allow the boost in productivity) are acceptable to you, great. If they are not acceptable, that platform is not an option.
It May Be More Costly
As Frank asserts, this point speaks to the cost to the ISV, not the end customer. The reason for this is that many ISVs are smaller shops who do not have the bandwidth to fully re-architect their application to fit the constraints of a platform. They need their application hosted, and they needed it done yesterday.
Many of the ISVs that I have talked to plan to accomplish this in stages. The first stage is to move the existing application and database up to a hosting provider, and use remote desktop technologies to deliver the application's GUI to the end user.
I can almost hear an audible groan of disdain from cloud purists:
“You can’t do that! The application must be totally re-architected in order to take advantage of the cloud.”
That is true, but pragmatism is holding the trump card. Don’t let the perfect be the enemy of the good!
For many of these ISVs, the end goal will be to re-write as a “cloudy” application (and thus reap all of the cost savings to both them, and their customers), but the direct path may not be the most cost effective.
Now let’s take off my hat, and put on the Enterprise-end-user hat. To understand the reaction wearing this hat, I invite you to read Frank’s blog post.
As Frank points out, when the original post is read as an enterprise end-user (or even an enterprise developer), a lot of the arguments do not make any sense. This is because enterprises and ISVs are different beasts.
An enterprise knows its requirements. It knows what local data centers it will need, and it controls all of its end-users.
An enterprise does not have to consider what would happen if a new customer appeared in a country with strict data laws and no data center for its platform located there.
An enterprise does not have to consider that it might suddenly find out its application needs to be HIPAA compliant because it was able to score a new customer in the health care space. (I am not saying an enterprise would never have to be HIPAA compliant, just that it would be better able to plan for it.)
It’s Less Secure
The question of multi-tenancy within the application is meaningless here. All the data in the application is for that enterprise. There is no risk of having your enterprise's data accidentally exposed to another enterprise due to a programming or configuration error.
It’s Less Powerful
An enterprise is in control of all of its users, and is able to limit functionality by mandate. For example, IT departments often mandate, "This is our list of supported browsers," or "This is our list of supported devices."
Most ISVs are not in a position to make mandates to their users. If they cannot support a certain feature, they lose customers. That customer will not care if the excuse is, “My underlying platform does not support that.”
It May Be More Costly
From time to time, enterprises need to do overhauls of their applications. While these are disruptive, there is nothing you can do except grit your teeth and wait for the disruption to pass.
It is much harder for an ISV to tell their customers:
“We have to do a major internal rewrite. This means our next release will contain almost no new features, and will probably be late.”
(In reality, ISVs still actually have to do this, but they try to mitigate it by doing it in smaller chunks.)
There were other use cases in the market beyond those being met by the SAP OnDemand offerings on which I usually concentrate (OnDemand Core, OnDemand Edge, SAP NetWeaver OnDemand, etc.).
The SaaS market is varied / more complicated than many assume.
I think this hits the nail on the head. The original post was targeted at the group outlined in his first point.
The second point is a good reminder for me. I spend so much of my day wearing my own hat (after all, it is comfortable), and I failed to anticipate how these ideas would be interpreted if read wearing a different hat. I apologize for the confusion it has caused.
This past week I attended the Cloud Connect 2012 show in Santa Clara, California. In addition to attending the talks, I was also staffing our Sybase booth where we were exhibiting our new cloud database for ISVs, SQL Anywhere OnDemand.
Although I was not able to attend as many sessions as I would have liked due to traffic at our booth (a high-quality problem), a few interesting themes emerged during the conference.
Emphasis on Private Clouds
I attended this same conference last year, and I recall that last year had a strong focus on the public cloud and public cloud providers. This year, I felt there was a shift to a strong focus on private clouds. For example, one of the opening keynotes was delivered by Allan Leinward from Zynga. He explained how Zynga has shifted from being 80% public – 20% private to 20% public – 80% private over the past year. (More on that story here.)
I also found that the talks had a greater focus on the private cloud. For example, scanning the titles of all of the sessions reveals six titles that contain “Private”, but only one that contains “Public”. Going hand-in-hand with this, there also seemed to be greater number of security and governance talks compared to last year.
Lastly, it seemed there were a greater number of private (or hybrid) cloud providers exhibiting in the expo hall. In fact, one notable absence in the public cloud space was Amazon Web Services. This was especially noteworthy because they were a major sponsor of the show last year.
My take-away from this is that as the cloud is becoming more “real” for businesses, the concerns around security are forcing them to think more carefully about the public cloud/private cloud question.
Importance of Utility Billing Models
A few of the keynote speakers spoke about the types of enterprise applications that are moving to the cloud. They noted that CIOs are often the last to know what cloud applications and platforms are being used inside their own enterprises. In fact, they suggested that often the best way to find out is to examine middle managers' expense reports. Why?
Because most of these cloud offerings require no capital expenditure and only a relatively modest operating expenditure, they are well within the expensing limits of a middle manager. Adoption of cloud technologies in an organization is bottom-up. By offering a cap-ex-free, utility model for your product or service, you enable middle managers to circumvent their own IT staff (or at least, keep them ignorant).
My take-away from this is that any product or service that will be offered in the cloud should have a cap-ex free, utility option. This will allow front-line workers to start using it in a production environment, while only requiring approval from a manager with a company credit card. The hope is that once IT finds out about it, it will be so well entrenched that IT will need to find a way to support it.
Plan for Failure
It would seem that some of the high-profile cloud outages and large-scale data breaches in the past year have sobered people to some of the realities of the cloud. I think this is a natural change in focus as the cloud starts to mature. As more applications move to the cloud, more people are experiencing the new challenges that it brings.
The part I found most interesting was listening to the different strategies for dealing with failure. The most common was to "just accept that it will happen." By accepting that failures will happen, and that they do not just happen in exceptional circumstances, you will be better prepared to deal with them.
Putting this theory into practice, I heard a few mentions of companies who have created an equivalent of Netflix's Chaos Monkey. A Chaos Monkey does exactly what it sounds like: it causes chaos. It is a process that runs against a production site and randomly kills processes, machines, network connections, etc. The idea is that if failure is a daily occurrence, there is no reason to fear it. It sounds scary, but it seems to work.
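The core idea can be illustrated with a toy simulation. Real Chaos Monkeys terminate actual cloud instances; the fleet model below is invented purely to show the principle that a service designed for failure keeps serving after a random kill.

```python
import random

# Toy model of a redundant fleet subjected to a Chaos Monkey.
class Fleet:
    def __init__(self, size):
        self.instances = {f"node-{i}": "running" for i in range(size)}

    def kill_random(self, rng):
        # The "monkey": pick any instance and terminate it.
        victim = rng.choice(sorted(self.instances))
        self.instances[victim] = "dead"
        return victim

    def serving(self):
        # The service is up as long as at least one instance survives.
        return any(s == "running" for s in self.instances.values())

rng = random.Random(42)  # seeded so the run is reproducible
fleet = Fleet(5)
victim = fleet.kill_random(rng)
print(victim, fleet.serving())  # one node is dead, but the service is up
```

If `serving()` ever came back false after a single kill, that would be the design flaw the exercise is meant to expose, found on your schedule rather than your customers'.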
All in all, it was a good show. It will be interesting to see what the next year brings for the cloud, and what this show will look like next year.
The views and opinions expressed in this post are my own, and do not necessarily reflect those of Sybase, an SAP Company.
The cloud puts an opaque level of indirection between you and your data. And, of course, this is the whole point. The cloud creates a virtual world of servers and disks that is seemingly both everywhere and nowhere all at the same time. However, this virtual world is superimposed on the real world. And in the real world, location matters. To illustrate this, we are going to go back a few years before the word “cloud” was such a buzzword.
A few years ago, Lakehead University in Ontario moved its email system to Google's hosted service. This caused a lot of concern for the University's professors, who believed that this move violated their right to private communication as stated in their collective agreement. Why? Because the service would be hosted in the United States, the Canadian-based professors' data would fall under the domain of the U.S. Patriot Act. Professors were concerned that private communications could be scanned by the U.S. government, leading to them being denied access to the United States (or worse) without any reason given.
In 2008, the faculty association filed a grievance against the University. The case was eventually brought before an arbitrator. In June 2008, the arbitrator determined that the University did have the right to use the Google service because the wording of the collective agreement was not clear on whether or not the same privacy requirements were extended to email. The arbitrator concluded his decision with (emphasis mine):
While I am sympathetic to their plight and the fact that big brother could be watching over their e-mail communications, it simply brings to the fore the caution advanced by Mr. Pawlowski when he commented upon e-mail systems generally before the Senate. One should consider e-mail communications as confidential as are postcards.
It is clear that location mattered in this case. In the end, the University was allowed to make the move, but only with the acknowledgement that their email was not, in fact, private.
If you are an ISV that is looking at creating a hosted application for your customers, the location of the data will matter. To your customers, your service will exist “in the cloud” and will be accessed over the internet in very much the same way as the professors accessing their email at Lakehead University.
But data is the life-blood of any organization. Just like the professors at Lakehead University, you should not be surprised when your prospective customers start asking tough questions about where their data is hosted. You also need to think about how location will affect your ability to service customers who fall under specific policies such as HIPAA, Sarbanes-Oxley, or PCI-DSS.
Fortunately, SQL Anywhere OnDemand Edition “Fuji” is here to help by giving you flexibility in many aspects of your deployment, including the location of the data. It is not a service hosted by Sybase, but rather a piece of software that you can take and run wherever you want. SQL Anywhere OnDemand Edition “Fuji” allows you to hook multiple machines together to make a cloud that may be spread across multiple physical locations. For example, you may keep some of your cloud in a more expensive HIPAA-compliant data center for those customers who require it, and some in a cheaper data center for those who do not. Similarly, you may have data centers spread across multiple countries, allowing you to store each customer's data on disks that are physically located in that country.
What makes SQL Anywhere OnDemand Edition special is that you can connect all the data centers together into a single cloud and manage them all from a central console. By setting declarative rules on which databases can end up where, you can tell your customers exactly where their data is stored, and even let them visit your data center. (From talking to ISVs, I was surprised just how often this last one is a requirement.)
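To illustrate what declarative placement rules of this kind might look like, here is a Python sketch. The rule format, tenant attributes, and data-center names are all invented for illustration; they are not Fuji's actual rule syntax.

```python
# Each rule pairs a predicate over tenant attributes with the set of
# data centers whose databases may hold that tenant's data.
# Rules are evaluated in order; the first match wins.
RULES = [
    (lambda t: t["needs_hipaa"], ["us-hipaa-dc"]),
    (lambda t: t["country"] == "DE", ["frankfurt-dc"]),
    (lambda t: True, ["us-east-dc", "us-west-dc"]),  # default placement
]

def allowed_datacenters(tenant):
    for predicate, centers in RULES:
        if predicate(tenant):
            return centers
    return []

print(allowed_datacenters({"needs_hipaa": True, "country": "US"}))
# ['us-hipaa-dc']
print(allowed_datacenters({"needs_hipaa": False, "country": "DE"}))
# ['frankfurt-dc']
```

Because placement is data, not code, you can answer a customer's "where exactly is my data?" question by reading the rules rather than auditing the deployment.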
Location matters, even in the cloud. In the Lakehead example, the location argument was trumped by the fact that the data they were storing was considered as public as postcards. I doubt your customers will be satisfied by that answer if you are unable to provide them the location of their data.
The challenge vendors face when trying to market and sell something for “the cloud” is that the definition of “the cloud” is so broad and varied.
I would add that I think this broad and varied definition is the fault of those same vendors' marketing departments. It is an unfortunate reality that you can count on every vendor's marketing team to seize the technical buzzword du jour and co-opt it to fit whatever they are selling.
Disclosure: I am a member of the Sybase marketing team.
As a result, the word “cloud” conjures up connotations of cost savings, opportunity, risk, loss of control, flexibility, and general anxiety all at the same time. I think this is what motivated Chris’ assertion:
Ultimately, behind every “cloud”, there are real people managing real machines. What is marketed as a “cloud” is really a rack of machines, with a very real person who has to keep them running. To that person, the administrator, the “cloud” isn’t “in the cloud”; it’s in his own data center! The administrator must put together a set of machines, software, and administrative tools that enable everything to be viewed in a completely hands-off way by the users, so that they think of it as a “cloud”.
It wasn’t until I read this post that I realized that the broadening of the term “cloud” has conflated two related, but distinct concepts. Specifically, “Cloud” and “cloud computing” are not the same thing. “Cloud” refers to the place things are running. “Cloud Computing” is a set of technical characteristics.
I believe Chris’ post is dealing with the first concept, where the “cloud” is, and the fact that at the end of every cloud metaphor is, “a rack of machines, with a very real person who has to keep them running.” However, that doesn’t say whether or not that real person with the rack of machines is utilizing cloud computing concepts to help them get the most out of those machines.
This raises the question: is Fuji really a cloud solution? Or rather, does Fuji exhibit the characteristics of cloud computing? After all, if Fuji were nothing more than co-location of databases on a rack of servers, there would not really be anything “cloudy” about it. This topic came up very directly when we launched Fuji in September. During analyst briefings, I was asked to defend how Fuji could be considered a cloud, and not just co-located hosting. In essence, we were being asked to show that we were not guilty of the buzzword hijacking previously described.
Fuji is a cloud, and I aim to prove this by showing that it exhibits all of the characteristics of cloud computing. As we have already established, it is hard to get a good definition of cloud. As a result, I am going to rely on the wisdom of the crowd, and use the characteristics listed on the Wikipedia article for Cloud Computing. For each characteristic, I will explain how Fuji achieves it:
Agility improves with users’ ability to re-provision technological infrastructure resources.
Fuji provides this agility in two forms. The first is that you can dynamically add and remove computing resources to handle variable workloads. The second is that you are able to flexibly move databases amongst the computing resources to help achieve better throughput, and prepare for bursty workloads.
Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.
Communication in Fuji works in this manner. Fuji uses OData, a REST-like interface, for communication between machines. This API is currently exposed through the Cloud Command Utility, which allows actions in the cloud to be scripted.
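As a rough illustration of what scripting against an OData-style endpoint looks like, here is a sketch that builds a query URL. The host, entity set, and query options are hypothetical; the actual interface is the Cloud Command Utility documented with the product. (The sketch also skips URL-encoding for readability.)

```python
# Build an OData-style query URL. OData system query options are
# prefixed with '$' (e.g. $filter, $top); everything else here
# (host, entity set, option values) is invented for illustration.
def odata_url(base, entity, **options):
    query = "&".join(f"${k}={v}" for k, v in options.items())
    return f"{base}/{entity}?{query}" if query else f"{base}/{entity}"

url = odata_url("https://cloud.example.com/api", "Databases",
                filter="Status eq 'running'", top=10)
print(url)
# https://cloud.example.com/api/Databases?$filter=Status eq 'running'&$top=10
```

Because the protocol is just HTTP plus conventions like these, any language with an HTTP client can drive the cloud, which is what makes scripted administration possible.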
Cost is claimed to be reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
Although pricing has not yet been announced for SQL Anywhere OnDemand Edition, it would be a candidate for utility pricing as described here.
Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
The databases that run inside of Fuji can be accessed from a large number of platforms, architectures, and devices. These can range from desktop clients, to mobile web browsers. Furthermore, the Fuji infrastructure can be accessed and managed from any Flash-enabled browser.
Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
Peak-load capacity increases (users need not engineer for highest possible load-levels)
Utilization and efficiency improvements for systems that are often only 10–20% utilized.
I consider this one to be the most important characteristic, and Fuji exhibits it. The whole goal of Fuji is to allow multi-tenancy. By using the agility characteristics described above, Fuji delivers on centralization of infrastructure, peak load capacity, and utilization and efficiency improvements.
Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
Fuji allows your databases to be spread across any computing resources that have internet connectivity to each other. This can even include running across multiple IaaS providers. Fuji allows databases to be set up with multiple copies in a high-availability configuration to achieve business continuity and disaster recovery.
Scalability and Elasticity via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads.
Fuji scales with the number of databases that it is running. It is a trivial task to dynamically add and remove databases. As mentioned under the “agility” section, additional computing resources can also be added dynamically to help achieve this scaling.
Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
All of the computing resources that make up Fuji are constantly monitoring their own performance, and using web services to communicate it to all the other computing resources. With this aggregate knowledge, Fuji is able to better deliver on the peak-load capacity and utilization and efficiency improvements mentioned above.
Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users’ desire to retain control over the infrastructure and avoid losing control of information security.
In my opinion, this is the biggest win for Fuji. By keeping each tenant’s data totally isolated from the others, Fuji provides a very strong security model. Furthermore, the ability to host Fuji yourself on a rack of machines within your sole control can add an extra level of security. This is partially what Chris’ post was addressing (the “where” part of the cloud).
Maintenance of cloud computing applications is easier, because they do not need to be installed on each user’s computer.
Maintenance of the cloud is accomplished through a centralized, web-based console. There is no need to visit each machine, and nothing (apart from a Flash-enabled browser) needs to be installed on a machine to administer from it.
Fuji is a cloud in the sense that it exhibits all of the characteristics of cloud computing. This does not mean you have to run Fuji in the cloud, nor do you have to think of it as a cloud. If you would prefer to think of it as a data management tool for your private rack of servers, it will not disappoint you. If you would rather think of it as a data cloud layer over your raw computing resources, it will measure up.
It was exactly one month ago yesterday that we announced our new product, SQL Anywhere OnDemand Edition, to the world at Techwave in Las Vegas. SQL Anywhere OnDemand Edition is a data management solution that enables ISVs to build, deploy, and manage cloud applications without compromise, letting ISVs take advantage of the cloud’s economies of scale while giving them the tools to ensure they can still treat each of their customers individually.
In order to download the beta software, you will need to register for the beta program. You can register for the beta program here. After registering, we will send you an email with a link to the download, and your software key.
We will also automatically create an account for you on the SQL Anywhere OnDemand Edition “Fuji” Beta Forum. Your username will be everything before the @ symbol in the email address that you used to register. As an example, if you registered with the email address John.Doe@email.com, your username would be John.Doe. Your default password is password. You may change your username and password after logging in for the first time. Please feel free to ask any questions about the beta in this forum.
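The username rule can be expressed in a line of Python:

```python
# Forum username = everything before the first '@' in the
# registration email address.
def forum_username(email):
    return email.split("@", 1)[0]

print(forum_username("John.Doe@email.com"))  # John.Doe
```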
Please note that there was an error in some of the initial registration emails that informed registrants that their forum username was their full email address. This was incorrect. Instead, your username is everything before the ‘@’ symbol in your email address. I am sorry for any inconvenience this caused.
After a lot of preparation and work, Techwave 2011 has come and gone. However, this was a special Techwave for the SQL Anywhere team because we unveiled our brand new project, code-named “Fuji”, to the world. “Fuji” is a data management solution that enables ISVs to take business applications to the cloud without compromising either their needs, or their customers’ needs.
There is a lot more information available about “Fuji” on the official beta site, and I invite everyone to take a look. While there, sign up and pre-register to be notified when the beta software becomes available.
For today’s post I would like to focus on one particular aspect of Fuji: the flexibility it gives ISVs in choosing a hosting provider. This was made even more clear to me after reading an article that appeared in The Register on September 2nd entitled Apple’s iCloud runs on Microsoft and Amazon services.
The gist of the article is a rumour that Apple is planning to host its iCloud service across both Amazon Web Services and Microsoft Azure. While I do not want to comment on the veracity of this rumour, I did find the reasons the article cited fascinating. I have quoted the most interesting parts below:
By selecting two suppliers, both very different in their services and their level of maturity, Apple is reducing its risk of becoming hostage to a single supplier.
The iCloud data is being striped between the Amazon and Microsoft clouds. That means Apple or Microsoft or Amazon or all three have to implement through the software a way of identifying which user’s information is stored in what locations and then to route requests to the correct server.
If the data is duplicated, then software would handle load-balancing or randomly send user’s requests to one cloud or the other, or change access policies depending on things like network speed and server availability.
The challenge in running two clouds under an overall service, if there is one, will be in smoothly managing a unified system where the controllers could well be running on different operating systems or be written in different languages.
The benefits to Apple in this setup are very clear. They are not hostage to a single hosting provider, and they have balanced their risk because service disruptions at the two hosting providers are likely to be independent events. As the article said, the challenges of this architecture include handling the duplicated data, performing load balancing, and smoothly managing a unified system that may be running on separate operating systems. This is all well and good for the company with the largest market capitalization in the world, but how would you do this for your application?
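The routing problem the article describes can be sketched generically: every front end must be able to decide, for any given user, which provider holds that user’s data. One common approach is a stable hash of the user identifier. The sketch below is a hypothetical illustration of that idea in Python; it is not Apple’s actual implementation, and the provider names are made up.

```python
import hashlib

# Hypothetical providers across which user data is striped.
PROVIDERS = ["amazon-us-east", "azure-west"]

def home_provider(user_id: str) -> str:
    """Deterministically map a user to the provider holding their data.

    A stable hash of the user id picks the stripe; every front-end
    node computes the same answer, so no shared lookup table is needed.
    """
    digest = hashlib.sha256(user_id.encode()).digest()
    return PROVIDERS[digest[0] % len(PROVIDERS)]

# Requests for the same user always route to the same provider.
assert home_provider("john.doe") == home_provider("john.doe")
```

The trade-off of pure hashing is that adding or removing a provider remaps many users; schemes like consistent hashing reduce that churn.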
This is exactly what Fuji’s flexibility is designed to let you do. A single Fuji cloud can span over multiple data centers and hosting providers. Furthermore, it can even span over multiple operating systems and bitnesses. All that is required for a machine to become part of a Fuji cloud is that it is running either Windows or Linux, and has network connectivity to all of the other machines that make up the cloud. That is it!
Fuji will automatically do the work of letting you create copies of the data across other machines, and keep them up-to-date as changes are made. When a new connection is attempted, Fuji performs load balancing by redirecting the connection to the least loaded machine to run the queries. Lastly, Fuji allows you to manage all of the databases, servers, and machines that make up your cloud from a single, unified management tool.
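To make the load-balancing idea concrete, here is a minimal Python sketch of least-loaded connection redirection. It illustrates the general technique only; Fuji’s actual redirection algorithm and machine names are not documented here, and the numbers are invented.

```python
# Hypothetical machines and their current active connection counts.
machines = {"node-a": 12, "node-b": 3, "node-c": 7}

def pick_least_loaded(load_by_machine: dict) -> str:
    """Return the machine with the fewest active connections."""
    return min(load_by_machine, key=load_by_machine.get)

# Redirect a new connection to the least loaded machine.
target = pick_least_loaded(machines)
machines[target] += 1  # the redirected connection now counts against it
print(target)  # prints "node-b", which had only 3 connections
```

Real systems typically weigh more than connection counts (CPU, query cost, locality), but the redirect-at-connect pattern is the same.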
What does this mean for you? Well, it means you can do with your own application’s data what Apple is rumoured to be doing with their iCloud service. For example, some customers have told us that they would prefer to use local hosting providers over big providers like Amazon or Rackspace, because they want to be able to visit the servers that are hosting their data and talk to the operators face-to-face. But they do not believe that local hosting providers can offer the same level of SLA as the “big providers”. To mitigate this risk, they want to be able to run their databases across multiple local hosting providers. Fuji lets them do this.
Furthermore, there is very little risk of hosting vendor lock-in because of the modest requirements needed to run Fuji. The cloud space is currently immature, and ISVs are afraid they may not have picked the best hosting provider. Fuji gives ISVs the full flexibility to move to any provider that can supply them with either a Linux or Windows machine instance and network connectivity. By using Fuji, ISVs can be sure they are making bets that will give them the flexibility to respond to changes in the hosting market as they arise.
We can’t all be Apple. But by using Fuji, you can get some of the same benefits; making Fuji the data cloud platform ‘for the rest of us’.
This one is a little fun for a Friday: implementing Conway’s Game of Life in SQL Anywhere 12. The Game of Life is a simple zero-player game where an infinite plane of cellular automata live and die according to some simple rules. In this post, we will create a full version of this game (with GUI), in a single SQL statement. [Read more →]
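For readers unfamiliar with the rules before diving into the SQL version: a live cell survives with two or three live neighbours, and a dead cell with exactly three live neighbours becomes alive. A minimal Python sketch of one generation (not the single-statement SQL implementation from the post):

```python
from collections import Counter

def life_step(live: set) -> set:
    """Advance Conway's Game of Life by one generation.

    `live` is a set of (x, y) coordinates of live cells; the plane is
    effectively infinite because only live cells are stored.
    """
    # Count how many live neighbours each cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(blinker) == {(1, 0), (1, 1), (1, 2)}
```

The sparse-set representation mirrors what the SQL version has to do relationally: join each live cell against its eight neighbour offsets and group by coordinate.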