Who do you trust with your data – Google, Microsoft or Dropbox?

For me the battle between Google Drive, Dropbox and SkyDrive comes down to whom I trust with my data.  All of the solutions have very strong attributes.  Dropbox has the best and most widely adopted APIs.  Google has extremely strong integration with Google Docs and, I’m sure, superior search capability, and SkyDrive is a great value.

However, this is my data we’re talking about, and in the end my data is what is important.  I don’t believe any one of the vendors is in any danger of going out of business, and they all give you the ability to have your data available to you offline, so even if there is a service interruption you should still be able to get at some version of your files and data.

The bigger question comes down to what is being done with my data and who has access to it.  All three services allow you to put access controls on your shared data, but I’m more concerned about the security practices of each company.  Google is an advertising company, and its terms of service have caused somewhat of a stir.  Microsoft is just a really big target.  Their infrastructure will always be attacked, but Microsoft has done a fairly good job of promoting security in its products ever since Windows XP SP2.

The one provider I just don’t trust is Dropbox.  The lapse they had in allowing anyone access to your Dropbox data is just unforgivable.  I just can’t see myself entrusting any really important or sensitive data to them.  The nature of their security issue speaks to the culture, or at least lends a perception to the culture, of their development team.

Who do you trust with your data?

 

Update 07/31/12:  Dropbox has had another major security issue: http://techcrunch.com/2012/07/31/dropbox-admits-user-accounts-were-hijacked-adds-new-security-features/

What are some of the considerations for Cloud based ERP?

Recently in one of my courses at DePaul we’ve talked at great length about the different ERP delivery models.  One model that seems pretty popular today is cloud-based ERP.  Providers such as Salesforce.com offer an ERP solution, and before Oracle acquired PeopleSoft, PeopleSoft actually offered a pseudo cloud-based solution for small businesses.  One of the primary advantages of cloud-based ERP is that the initial investment for building out an ERP solution is reduced while still providing the scalability and reliability of a large enterprise-class solution.  In addition, the customer gets the advantages of reduced operations and maintenance costs and management overhead.

However, one of the perceived disadvantages of this model is the inability to customize the code to the same degree as a traditional solution such as PeopleSoft or SAP.  If a company likes 80% of the cloud solution, they are normally unable to customize the application to get the full needed functionality.  It’s a take-it-or-leave-it proposition.

This is not the only path for cloud-based ERP.  The previously described model is called Software as a Service, or SaaS.  Providers such as Salesforce.com also offer, in addition to SaaS, a cloud computing model known as Platform as a Service (PaaS).  The PaaS delivery model allows customers to build applications on the cloud provider’s platform through the use of Application Program Interfaces, or APIs.  Non-ERP examples of PaaS include Amazon’s database service (Henschen).  An end user can develop an application that makes calls to Amazon’s cloud-based database as opposed to a SQL Server or Oracle database hosted on their own servers.  One of the obvious advantages of this is the ability to scale beyond your existing capacity without the initial investment normally needed to scale.
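The contrast between a locally hosted database and a database reached through a provider’s API can be sketched in a few lines.  This is a minimal illustration, not a real cloud SDK: the local side uses SQLite as a stand-in for an on-premises database server, and the cloud endpoint and payload are hypothetical placeholders (a real client such as boto3 would sign and send the request).

```python
import sqlite3

# Traditional approach: the application talks to a database hosted on
# the organization's own servers (SQLite stands in for SQL/Oracle here).
def local_inventory_count(db_path=":memory:"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE orders (id INTEGER, qty INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 5), (2, 3)])
    (total,) = conn.execute("SELECT SUM(qty) FROM orders").fetchone()
    conn.close()
    return total

# PaaS approach: the same query becomes a call to the provider's API.
# The URL and body below are hypothetical, not Amazon's actual API.
def cloud_inventory_request():
    return {
        "method": "POST",
        "url": "https://database.example-cloud-provider.com/query",
        "body": {"statement": "SELECT SUM(qty) FROM orders"},
    }

print(local_inventory_count())  # 8
```

Either way the application logic is the same query; what changes is who operates the database and how much capacity sits behind it.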

The same concept applies to ERP-focused PaaS.  Salesforce.com provides APIs to its PaaS infrastructure, which allows organizations to build more customized instances of ERP hosted in the cloud (Sommer).  This offers some of the advantages of cloud computing while allowing the flexibility of creating customized modules.

The PaaS approach adds some unique capability to the ERP market.  There are two different markets for users of PaaS: the service provider space and the enterprise user.  From a service provider perspective, there’s the opportunity to build and resell SaaS solutions based on the underlying PaaS.  An example is FinancialForce.com, which built a SaaS solution on top of Salesforce.com.  This allows the company to provide services to new cloud customers as well as existing Salesforce.com customers by extending the capability of Salesforce.com.

The other market is the end-user enterprise.  In class we looked at a use case involving a company, EA Cake, that created a tailored production method that needed a computerized support system.  The first instinct would be to look toward a traditional ERP solution like SAP.  However, they discovered that SAP would force them to abandon some of their newly developed production processes.  Intuitively, you would also think that a cloud-based solution would have even less appeal, since it’s even less flexible.

With the PaaS approach, the enterprise can now custom-develop the modules necessary to run the new processes while leveraging the core advantages of the cloud.  All of the data center and infrastructure components of the solution are operated by the cloud provider.  This infrastructure includes the network, servers, database, web server and non-application security.  The enterprise would be responsible for the application layers of the solution, including designing the frontend, access control of the application and workflow.

The organization would still need to do all the due diligence associated with any successful ERP implementation.  The concept of operations (CONOPS) for this approach is something that would really need to be assessed and factored into the decision.  Even when an organization has both an in-house development team and an in-house infrastructure management team, collaboration between the two groups can be difficult.  When you add a third party to the equation, it becomes that much more complicated to manage trouble tickets, security and performance.

Also, a lot of the challenges associated with going with a cloud-based solution still exist in this development environment.  You still need to consider that your application is hosted on a shared infrastructure, and whether that shared infrastructure is certified for your industry or line of business.  A plastics manufacturer may have very different considerations than a Federal agency responsible for providing affordable housing.

This is also a paradigm shift for many software developers, who now must think in terms of cloud instances and interfaces as opposed to APIs for existing and mature ERP solutions.  However, if executed correctly, this model can help an enterprise avoid a great deal of the challenges associated with rolling their own system while leveraging the best of common platform systems such as Oracle.

Has your organization considered cloud-based ERP, and which flavor: SaaS or PaaS?

Think BYOD is an issue? Wait for Stealth IT

As a couple of posters have stated, this is not a new problem. However, I believe the solution is for organizations to adopt a framework for extending their data center to the public cloud. This is where solutions such as OpenStack, CloudStack and vCloud should come into play. If you asked me today, vCloud is the closest solution to this problem, since VMware is so prevalent throughout the enterprise, and in theory extending your infrastructure out to a vCloud provider should be an effort that is attainable by current IT staff. However, I don’t read many case studies on this being widely available on the cloud provider side of the equation and in production. Also, most IT departments aren’t ready to manage this type of environment.

On the flip side of the coin, with solutions such as OpenStack, which supports AWS-compatible APIs, you still need to invest a significant amount of resources into the control panel for your public/private cloud, and its operation at this point is even more complex than vCenter’s.

I guess the short answer is that these IT managers will continue to whip out the credit card, risk solving these business problems in an insecure/unsupported manner, and will have to clean it up as the organization and technology mature.

Gigaom

The acronym BYOD, which stands for bring your own device, is taking over both corporate America and the press release filter in my inbox. But an analyst report out Monday suggests that BYOD has a flip side that no one talks about — Stealth IT, or the IT pro side of the consumerization trend that has swept corporate America.

There are employees bringing their own devices and apps into the workplace, as summed up by the BYOD discussions, and on the other side are IT managers taking their own credit cards (or corporate cards), grabbing company data and then playing in the cloud. Deutsche Bank notes that the issue of employees taking data and devices outside of corporate firewalls (or leaving them on airplanes) is one management headache that is getting a lot of attention and products, but the concept of Stealth IT is still ripe for new…


Is it too late for Openstack?

Great post, and not just because you quoted me. As someone who works in this space, I have to help organizations decide which direction to go for cloud providers. The cloud manager is, I believe, the most critical component of the solution.

Companies have to make long-term decisions on which APIs to build their projects around. Compute, storage and network are all commodities that Rackspace, HP, Dell and Amazon can all provide. What’s critical is the API I design my application to use. All these companies have proven they can provide great hosting services, but how many have a proven track record of providing APIs to their cloud offering?
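One way to hedge that long-term API decision is to code the application against a thin abstraction and confine each provider’s API to a driver, in the spirit of libraries like Apache Libcloud. This is a minimal sketch with hypothetical driver classes, not any vendor’s real SDK:

```python
class CloudDriver:
    """The interface the application codes against, regardless of provider."""
    def create_server(self, name: str) -> str:
        raise NotImplementedError

class RackspaceDriver(CloudDriver):
    def create_server(self, name: str) -> str:
        # A real driver would call Rackspace's API here.
        return f"rackspace:{name}"

class AWSDriver(CloudDriver):
    def create_server(self, name: str) -> str:
        # A real driver would call the EC2 API here.
        return f"aws:{name}"

def provision_app(driver: CloudDriver) -> str:
    # Application logic depends only on the abstract interface, so
    # switching providers means swapping one driver object, not rewriting
    # every API call scattered through the codebase.
    return driver.create_server("erp-web-01")

print(provision_app(AWSDriver()))  # aws:erp-web-01
```

The abstraction doesn’t remove the lock-in question, but it concentrates it in one place instead of spreading it across the whole application.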

I hope this spurs great conversation. Well done.

Gigaom

There’s been a lot of news about OpenStack recently — notably a conference dedicated to the open-source cloud-computing platform this week and IBM and Red Hat (s ibm) (s rht) signing on to the effort. And yet there is a feeling in some quarters that  it may be too late for the project to take hold.

Two years after Rackspace(s rax) and NASA launched OpenStack in part to counter Amazon Web Services (s amzn), AWS keeps getting bigger and broader — with new, increasingly enterprise-focused services coming out all the time. There also is fear — even among some OpenStack proponents — that too many cooks might spoil the effort. Sure OpenStack could become the “Linux of the cloud,” but it could also get fragmented as each vendor adds its own secret sauce to the OpenStack underpinnings. The downside scenario is that OpenStack ends up more like Unix than Linux.

Taking on AWS…


What lessons can we learn about the cloud from Megaupload?

The Megaupload case represents one of the major challenges with the public cloud.  The obvious issue for legal use cases of the service is that non-infringing data is trapped in limbo along with the alleged infringing data.  One may say that a legitimate user should have seen this coming: Megaupload’s primary use case was no secret, and hosting critical data on their service was a risk.  However, what if the U.S. government didn’t trust the controls of their provider, Carpathia?

I’m sure Megaupload wasn’t Carpathia’s only customer.  I don’t believe Congress really has a handle on how to enact laws that deal with the complicated relationships between cloud providers, their customers and end users.  What if the FBI had interpreted the relationship differently?  What if, instead of just going after Megaupload, they had gone after Carpathia, as some are suggesting?

I’ve heard horror stories of the Feds coming in and seizing all of the servers in a hosting provider’s infrastructure.  Many hosting providers just fold at any request from law enforcement for data.  What can you do to protect your organization’s applications and data?  What relationship are you looking for your hosting provider to have with law enforcement to prevent this type of activity?  What’s the right balance?

XenDesktop 5 Provisioning Server support for vSphere 5.0

I’m a pretty big fan of XenDesktop.  It’s a slick and powerful VDI platform.  A while back I helped a small company deploy XenDesktop, and they’ve really appreciated the migration from XenApp.  They access the environment locally, remotely and via iOS devices.  It just works (for the most part).

One design consideration I had to make was whether to use MCS or Provisioning Server.  Provisioning Server is a powerful application that could easily be (and was) a standalone product.  Powerful usually means complicated, and Provisioning Server doesn’t disappoint: it is complicated.

For a small deployment, MCS is a nice alternative.  It integrates well with vCenter/VMware and gives all the functionality needed for a small environment.  Another advantage I just discovered via Citrix’s Twitter feed is that MCS supports vSphere 5.0, unlike Provisioning Server.  I’m glad I made that design choice, because I can just imagine the panicked call from that manufacturing company after they upgraded to vSphere 5.

Review of PHD Virtual Backup 5.4

Sponsored Post

What is PHD Virtual Backup?

PHD Virtual Backup is a virtual server backup application that comes in two flavors: a version for Citrix XenServer and a version for VMware ESXi.  This solution is geared toward a virtualized environment, so if you have a mix of physical and virtual servers, you will need a combination of solutions to back up both environments.

PHD Virtual offers a plugin for the vCenter client, which allows the elusive single pane of glass for both the administration of your vSphere environment and backup.  Another feature is the ability to replicate data across physical hosts.  PHD Virtual accomplishes this by doing block-level delta replication between data sets.
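The idea behind block-level delta replication is that only the blocks that changed since the last copy are shipped across the wire.  A toy sketch of the concept (the tiny block size and hash-comparison scheme are illustrative assumptions, not PHD Virtual’s actual implementation):

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; real systems use KB-sized blocks

def block_hashes(data: bytes):
    # Hash each fixed-size block so two datasets can be compared cheaply.
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(old: bytes, new: bytes):
    # Collect only the blocks whose hashes differ from the previous copy;
    # this is what gets replicated to the other host.
    old_h, new_h = block_hashes(old), block_hashes(new)
    changed = {}
    for i in range(len(new_h)):
        if i >= len(old_h) or new_h[i] != old_h[i]:
            changed[i] = new[i * BLOCK:(i + 1) * BLOCK]
    return changed

old = b"AAAABBBBCCCC"
new = b"AAAAXXXXCCCC"
print(delta(old, new))  # {1: b'XXXX'}: only the middle block is shipped
```

For a virtual disk where only a small fraction of blocks change between runs, this is the difference between replicating gigabytes and replicating megabytes.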

PHD Virtual Backup has most of the features you’d expect in a modern virtual host backup application:

–  Block-level backup to reduce space on your backup medium

–  Data deduplication for reduced backup time and additional disk space savings

–  File-level restores of files within the guest file system

–  Backup to NFS and CIFS shares or local storage
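The deduplication feature listed above generally works by content addressing: identical chunks across backups are stored once and referenced by hash.  A minimal sketch of that idea (the chunking and manifest layout are illustrative assumptions, not PHD Virtual’s on-disk format):

```python
import hashlib

def dedup_store(chunks, store=None):
    # store maps sha256(chunk) -> chunk; shared across backup runs.
    store = {} if store is None else store
    manifest = []
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:      # only new content consumes space
            store[key] = chunk
        manifest.append(key)      # the backup itself is just a list of hashes
    return manifest, store

backup1 = [b"os-files", b"app-data", b"logs"]
backup2 = [b"os-files", b"app-data", b"new-logs"]  # mostly unchanged

m1, store = dedup_store(backup1)
m2, store = dedup_store(backup2, store)
print(len(store))  # 4 unique chunks stored for 6 chunks referenced
```

Since most guest VMs share OS files and change little day to day, this is where the backup-time and disk-space savings come from.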

PHD Virtual obviously adds its own take on these features; we’ll focus on backup and restore.

Installation

Installation is pretty straightforward.  There’s a plug-in for vCenter and an OVF file for the virtual machine.  If you’ve ever deployed a virtual appliance and installed a plug-in for vCenter, you will have no problems with this install.  One thing I find common with applications designed to work with either VMware or Citrix: when you run it for the first time, it asks for the “Hypervisor” address and credentials.  In the case of VMware, that’s your vCenter’s address and credentials.

There are more than a couple of options for backup targets.  PHD Virtual allows you to back up to NFS, CIFS, LUN and iSCSI targets.  Basically, any storage medium you can mount or access via the network from PHD Virtual’s VM can be used as a backup target.  It also has an exporter application that allows you to move backup files to tape, and yes, this is still a critical enterprise need.

Note that there are some requirements that need to be taken into account.  PHD Virtual uses VMware’s vStorage APIs, which are not available in the free version of ESXi, so you need a vCenter environment.  Also, if you’re backing up 64-bit machines, Intel VT or AMD-V is required.

Backup

As stated above, PHD Virtual uses the vStorage APIs to back up virtual machines.  It takes a snapshot of the target virtual disk and creates a disk-based backup that’s then deduplicated.  I found this to be pretty straightforward and standard.  This is exactly one of the use cases for the vStorage APIs, and it will make support between the two vendors manageable.

Restore

You have basically two options for performing restores: you can restore an entire virtual machine instance or individual files.  Restoring an entire virtual machine creates a newly named virtual machine in your vCenter directory.  You can, of course, select the target and name of the new virtual machine.  You then have to go in, verify the restored virtual machine is the desired version, and manually delete and rename the restored machine.

Restoring individual files works by restoring a VMDK, mounting that virtual disk via iSCSI and copying the files to your target directory.  This is a novel approach that works well.

Conclusion

PHD Virtual is a capable backup application for an all-virtualized environment.  The single-pane experience is nice for smaller environments, and it’s not complicated to manage.  If you need item-level backups and restores for databases or mail, you will need an additional backup solution.  Also, this application is focused on a virtual environment, so if you have physical servers, like a vCenter server, you will need an additional solution.

All in all, this is a very nice niche backup application.  You get a good deal of features that are simple to manage.  If your use case supports it, it is a great solution.  If you are a larger enterprise, you may be better served by a more general VM-aware backup solution with more advanced features.

Technology, Virtualization and Cloud Computing
