Category Archives: Virtualization

Why I changed my mind about the cloud

I was very skeptical about cloud deployments for quite a while. I had seen the failed promise of application service providers (ASPs) and virtual desktops in the late 1990s and early 2000s and was very cautious about committing our company’s or our clients’ most sensitive data to “computers that belong to someone else”.

What changed my mind? Primarily security and management. I remember being at an AIIM meeting in NYC (at the Hotel Pennsylvania, across 7th Avenue from Penn Station and MSG) where the speaker asked the audience whether they thought their own security people were as good as those that Amazon and Microsoft could attract. Like all good scientists, I knew to re-examine my assumptions and conclusions when faced with new data, and that comment really resonated with me.

I thought about where the vulnerabilities and issues were with self-hosted systems. How their ongoing stability often relied on heroic efforts from overworked and underpaid people. How I had started my tech career at a 2000-era dotcom as the manager of the team desperately trying to scale for growth, manage security, and also fix email and phone issues in the office. I remembered the ops manager at DoubleClick (when they were based in the original Sky Rink building in Chelsea) telling me how they handled their commodity servers: reboot after an error, then reimage, then straight to the dumpster if that didn’t fix it – the earliest instance I had come across of treating servers “like cattle, not pets”.

Over time, my thinking changed and I now think that cloud server deployment is the best solution for almost all use cases. We’ve deployed complete cloud solutions for ministry clients in NZ on private cloud engineered systems and on government cloud virtual servers. TEAM IM moved all of our internal systems to the cloud and gave up our data center 6 or 7 years ago – now everything is Azure, AWS, or Oracle Cloud.

Is it right for everyone? No; here are some examples I’ve encountered where it is not:

  • An insurance client that runs 40+ data validations against internal AS/400 systems with every process
  • A national security client managing extremely secure archival data in house (although that may change in the future)
  • An oil exploration company deploying to remote sites with very limited bandwidth (although we did sync some back-end data nightly).

But for most of you? Can you hire better engineers and security staff than Microsoft or Amazon? Can you afford to deploy servers around the world in different data centers? Can you afford to have additional compute and storage capacity sitting in racks ready to go? Do you operate in an environment where connectivity is ubiquitous and (relatively) cheap and fast?

Rethink your assumptions and biases. Change your mind when presented with new data. Make the best decision for your organization or clients. Good luck!

Alfresco integration with Salesforce

Back to meat and potatoes – or their vegetarian equivalent in my case.

We are working with a client to deploy Alfresco One as a content and records management platform for their business.  An important requirement is that we be able to integrate with Salesforce as that’s where their contracts are currently stored as attachments and where their workflow exists.  During the scoping process we knew that Alfresco had created a Salesforce integration app that was available on AppExchange.

However, there are some limitations and “gotchas” that are good to know about when designing a solution around this integration.

  1. The integration is only supported for my.alfresco hybrid cloud deployments. This is driven by Salesforce’s security requirements. If you have an on-prem or hosted Alfresco installation, you will need to synchronize with the cloud extranet.
  2. The integration is really designed to be initiated from the Alfresco end rather than (as in our case) putting attachments from Salesforce into Alfresco.  The developers at Alfresco have been very helpful in giving us guidance on how to work with this, but understanding this “normal flow” would have helped us earlier in the process. Learn from my mistake!
  3. All the content from Salesforce is put into a single “attachments” folder in a single site. However, if the SF record has an Account record as its parent, that account becomes the root of the structure and each related object becomes a child folder beneath it. For example: Attachments -> ClientA -> OpportunityZ and Attachments -> ClientB -> CaseY.
  4. Because nodes are tracked no matter where the files are moved to, you can use Alfresco rules to move content around if that makes better sense in your existing organization (see the sketch after this list for a programmatic equivalent).
  5. All the content in the SF site will share common security, so you will have to assign permissions to content yourself. Again, the integration is built from the PoV that content originates in Alfresco, is synced to the cloud, and from there to SF. If you are reversing that flow, things become WAY more complex.
  6. The current release of the Alfresco integration app only supports a default set of metadata for Accounts, Opportunities, Contracts, and Cases – these need to be mapped to Alfresco properties. However, we hear that there may be support for custom metadata in the next release.
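To illustrate item 4: because the sync tracks nodes rather than paths, a synced attachment can be relocated without breaking its Salesforce link. Here is a minimal sketch that does the move through Alfresco’s v1 public REST API (available from Alfresco 5.2 onward) rather than a Share folder rule; the host, credentials, and node IDs are all placeholders.

```python
# Sketch: move a synced attachment node to another folder through
# Alfresco's v1 public REST API (Alfresco 5.2+). Because sync tracks the
# node itself, the Salesforce link survives the move. Host, credentials,
# and node IDs are placeholders.
import requests

ALFRESCO = 'https://alfresco.example.com'
AUTH = ('admin', 'admin')  # substitute real credentials

node_id = 'attachment-node-id'          # node to relocate
target_folder_id = 'contracts-folder'   # destination folder node

resp = requests.post(
    f'{ALFRESCO}/alfresco/api/-default-/public/alfresco/versions/1'
    f'/nodes/{node_id}/move',
    json={'targetParentId': target_folder_id},
    auth=AUTH,
)
resp.raise_for_status()
print('Moved:', resp.json()['entry']['name'])
```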

Overall, the integration is great if you are following the use case it was designed to address. The documentation is good, installation is easy, and the developers have been helpful and responsive to questions. But we may need to look at other ways to extract the existing content and populate our Alfresco repository. I’m currently looking at Data Loader as a tool to extract existing objects for import into the Alfresco instance; a programmatic equivalent is sketched below.
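As a sanity check on that approach, here is a minimal sketch of the same extraction done directly against the Salesforce REST API with the simple-salesforce library, assuming the contracts live on the classic Attachment object (not Files/ContentVersion); the credentials, API version, and output path are placeholders.

```python
# Sketch: export classic Salesforce Attachment records and their binary
# bodies so they can be bulk-imported into Alfresco. Assumes the classic
# Attachment object; credentials, API version, and the output directory
# are placeholders.
import os

import requests
from simple_salesforce import Salesforce

sf = Salesforce(username='user@example.com',
                password='secret',
                security_token='token')

# Pull the attachments plus enough parent info to rebuild folder structure.
records = sf.query_all("SELECT Id, Name, ParentId FROM Attachment")['records']

os.makedirs('export', exist_ok=True)
for rec in records:
    # The binary body is served from a separate REST endpoint.
    url = (f"https://{sf.sf_instance}/services/data/v52.0"
           f"/sobjects/Attachment/{rec['Id']}/Body")
    resp = requests.get(url, headers={'Authorization': f"Bearer {sf.session_id}"})
    resp.raise_for_status()
    with open(os.path.join('export', f"{rec['Id']}_{rec['Name']}"), 'wb') as fh:
        fh.write(resp.content)
```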

(Thanks to Jon Chartrand, Jared Ottley, and Greg Melahn for their help in gaining this insight – all mistakes are mine)

WebCenter on Exalogic and Exadata

There’s currently a lot of interest in moving virtualized environments to Oracle’s engineered systems.  This is partly because they are good systems and, for organizations that can use their capabilities, provide good value for money and high performance. Partly because Oracle licensing makes it tough to virtualize cost-effectively on other platforms (looking at you, VMware). And partly because Oracle sales people are extremely motivated to sell hardware along with software.

Unfortunately, though, there is still a lot of confusion about how this might impact deployment of WebCenter on these engineered systems.  Here are a few scenarios you may come across and how to deal with them.

  • Exadata (or Database Appliance) – no impact at all from an installation point of view. The database is still just a database as far as the application is concerned and will continue to connect via JDBC (see the datasource sketch after this list).
  • Exalogic with native OEL – this is a rare configuration, but Exalogic does support installing OEL natively on compute nodes. In this case there is no difference from installing on any other Linux OS. Assume (and ensure) that networking is handled by the Exalogic administrator, because that is where issues may arise.
  • Exalogic with virtualized compute nodes – the most common deployment. The standard/supported approach is to install all the WebCenter components on virtual OEL servers as usual. Installation of WebLogic and WebCenter on Elastic Cloud (Exalogic) is exactly the same as on a regular server. Networking can be challenging when configuring virtual environments on Exalogic, so be sure that is all worked out ahead of time. Domain configuration and data stores should be on the ZFS storage appliance.
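To illustrate the Exadata bullet above: from WebCenter’s perspective the database is reached through an ordinary JDBC datasource, typically pointed at the Exadata SCAN listener. This is a minimal WLST sketch, assuming a hypothetical datasource named WebCenterDS and placeholder admin credentials, SCAN host, and service name.

```python
# WLST sketch (run via wlst.sh): point an existing WebCenter datasource
# at an Exadata database through its SCAN listener. The datasource name,
# admin credentials, SCAN host, and service name are all placeholders.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()

dsName = 'WebCenterDS'
cd('/JDBCSystemResources/%s/JDBCResource/%s/JDBCDriverParams/%s'
   % (dsName, dsName, dsName))
# One SCAN address abstracts all of the RAC nodes behind Exadata.
cmo.setUrl('jdbc:oracle:thin:@//exa-scan.example.com:1521/wcp.example.com')

save()
activate()
disconnect()
```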

A major value add for Exalogic is the optimization for WebLogic that is designed into the system. All of these optimizations have to be configured on a domain or server basis, though; they are not enabled out of the box. This is a good resource for working through the optimizations.
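For the domain-wide piece, the main switch can be flipped from the admin console (“Enable Exalogic Optimizations”) or scripted with WLST. A minimal sketch, assuming WebLogic 10.3.4 or later and placeholder admin credentials:

```python
# WLST sketch: enable the domain-wide "Exalogic Optimizations" flag
# (the same setting as the admin console checkbox, WebLogic 10.3.4+).
# Admin credentials and URL are placeholders; the flag is non-dynamic,
# so managed servers need a restart to pick it up.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()

cd('/')  # the flag lives on the domain MBean
cmo.setExalogicOptimizationsEnabled(true)

save()
activate()
disconnect()
```

The per-server tunings (session replication over the internal InfiniBand fabric, network channels, and so on) still have to be applied individually on top of this flag.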