There are several ways to integrate with Salesforce that I am currently aware of:
- The Salesforce API
- The Dataloader
- Outbound Messages
The Salesforce API allows you to create, query and edit records within Salesforce in all the ways you would expect. This is a strong option for integrations that are asynchronous, batch-based or driven by events on another system.
The Dataloader lets you move data in both directions between Salesforce and JDBC/ODBC connections, databases, CSV files and so on, and can be invoked through a UI or from the command line. This is great for quickly building batch operations.
Outbound Messages are used to invoke webservices at certain points in an object’s workflow (creation, approval etc) and are well suited to synchronous integrations with third party systems driven by events within Salesforce.
The use case I was looking into when I performed this integration was creating a Jira ticket for a production team to work against when an order came in, so Outbound Messages were a good fit. In the end I went down a different route to satisfy the requirement, as described in my last post, but performing this integration was a good learning experience and I hope it will be of some use to someone.
At the Salesforce end I built a simple application roughly following the instructions provided and added an Outbound Message containing all the fields on a Custom Object, triggered by a simple workflow with one step (create).
The main hurdle I had to get over was the fact that Salesforce sends Outbound Messages as SOAP requests, which has several disadvantages:
- The messages include named fields, so they need to be edited every time you edit the custom object they relate to.
- You need to build a SOAP webservice to consume the messages rather than cobble together a web hook.
- You are given a WSDL file and have to build a webservice that conforms to it, so you either need a tool that can reverse engineer a SOAP server from a WSDL file (such as the .NET svcutil) or you have to manually create a webservice that is an exact match (no mean feat, given the inflexibility of the automation in some frameworks).
- Salesforce supplies a WSDL contract which has strongly typed fields mapped to each property of the custom object so if you add or remove properties you will have to rebuild your webservice.
To circumvent these issues I opted not to use a SOAP framework, but instead to work with the raw request using an XML parser and return a manually constructed SOAP ack message. This allowed the script doing the integration to be decoupled from the message content, except for the target Jira project, which it expects to find in a property called ‘Project’.
The integration into Jira was very straight forward thanks to the excellent jira-python library.
Initially I found the choice of XML parsing libraries for Python bewildering, but once I had decided that SAX parsing would be overkill and looked at how actively each library was maintained, ElementTree seemed a strong choice, with decent XPath support.
There are a lot of web frameworks for Python, but since all I needed was close-to-the-wire access to the request and response, the standard library’s cgi module got the job done with no fuss.
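The approach described above can be sketched roughly as follows. This is an illustrative outline only, assuming Python 3’s ElementTree: the sample payload, namespaces and ack envelope are my placeholders, not Salesforce’s exact schema, and the namespace handling is deliberately generic so the script keeps working when fields on the custom object change.

```python
# Sketch of the decoupled Outbound Message handler: parse the SOAP body
# with ElementTree, harvest whatever fields the sObject carries, and
# return a hand-built ack. Namespaces and the ack envelope below are
# illustrative rather than Salesforce's exact schema.
import xml.etree.ElementTree as ET

ACK = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
 <soapenv:Body>
  <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
   <Ack>true</Ack>
  </notificationsResponse>
 </soapenv:Body>
</soapenv:Envelope>"""

def local_name(tag):
    """Strip the '{namespace}' prefix ElementTree puts on tags."""
    return tag.rsplit('}', 1)[-1]

def extract_fields(soap_body):
    """Collect every child of every sObject element, regardless of what
    the fields are called - this is what decouples the script from the
    custom object's definition."""
    root = ET.fromstring(soap_body)
    fields = {}
    for elem in root.iter():
        if local_name(elem.tag) == 'sObject':
            for child in elem:
                fields[local_name(child.tag)] = child.text
    return fields

if __name__ == '__main__':
    # hypothetical payload; the 'Project' field picks the Jira project
    sample = ('<notifications xmlns="urn:example">'
              '<sObject><Project>OPS</Project>'
              '<Name>Order-42</Name></sObject></notifications>')
    print(extract_fields(sample))
```

A CGI wrapper then only has to read the request body from stdin, call `extract_fields`, create the Jira issue, and print `ACK` back with a `Content-Type: text/xml` header.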
This script is obviously not production ready – it doesn’t check that the messages come from Salesforce and has no error handling – but if anyone builds it out or ends up working on something similar, please get in touch, as I would love to be involved.
At FMG we are in the final stages of a Salesforce deployment and are currently feeling our way around various additional roles it can fulfil within our business, most recently order fulfilment. Once an order has been placed with a salesperson it needs to be handed over to the production team; to date this process has been managed via Google Docs and Dropbox, making it ripe for improvement. There are several apparent options:
- Manage the entire process in Jira
- Integrate Salesforce with Jira so sales staff use Salesforce and Production staff use Jira
- Manage the entire process in Salesforce
Having been a software developer for many years, I think of Jira as the natural place for all production activities to take place and have been treating it as such with great success so far. This new use case, however, involves sales staff who live in Salesforce and have no license, knowledge or aptitude for Jira, which rules out option 1.
To work out how to approach option 2 I initially evaluated the three plugins available to integrate Salesforce with Jira:
- Wikidsmart CRM – this looks very fully featured, if complex, but does not support the hosted version of Jira, which makes it a non-starter for me.
- Go2Group CRM Plugin – again, this looks reasonable, but it only works with self-hosted Jira instances.
- ServiceRocket Connector for JIRA – this supports hosted Jira, but having spent an afternoon reading the documentation it gradually dawned on me that its huge flexibility makes it more of a framework for Apex developers than a quick integration tool for keeping Jira tickets and Salesforce objects in sync.
I am comfortable building custom apps in Salesforce, but to justify the overhead of picking up a domain-specific language I would need a major set of requirements affecting many users and no other option. For this reason I prototyped a Python script to listen for Outbound Messages fired by a Salesforce Workflow and raise tickets in Jira. Doing this was not too painful, but it made me reflect on the further requirements of this integration – specifically the ability for the production team to bounce tickets back to sales for clarification, and the ability for the sales team to see what stage of fulfilment an order is at.
A tight integration between two workflow engines and ticketing systems that can be altered by business stakeholders will always be complex and brittle, and for that reason both the custom integration and the highly configurable off-the-shelf connector are bad options, ruling out option 2.
Through a process of elimination I have arrived at option 3 meaning fulfilment teams that interact directly with sales staff will live in Salesforce while those abstracted by business analysts and project managers will live in Jira. The only thing left to do now is negotiate license fees with Salesforce…
In 2013 Gartner predict device shipments as follows:
| Device Category | Devices shipped in 2013 (millions) |
| --- | --- |
These figures put tablet shipments at an eighth of total phone shipments. Gartner also say:
In the first quarter of 2013, smartphones accounted for 49.3 percent of sales of mobile phones worldwide. This is up from 34.8 percent in the first quarter of 2012, and 44 percent in the fourth quarter of 2012. On the other hand, sales of feature phones contracted 21.8 percent in the first quarter of 2013.
From this we can infer that tablet shipments for 2013 represent between a quarter and a sixth of smartphone shipments for the same period.
Based on these numbers tablets are a far smaller market than smartphones, but this still only tells part of the story. According to ReadWrite, approximately half of tablets are primarily used at home rather than out and about, so of the tablets available only around half can actually be considered ‘mobile’ devices. All of this leaves tablet users at between an eighth and a twelfth of your smartphone users.
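For what it’s worth, the arithmetic behind those fractions takes only a couple of lines to check – the inputs are just the approximate ratios quoted above:

```python
# Back-of-envelope check of the fractions above, using the quoted figures.
tablets_vs_all_phones = 1 / 8    # tablet shipments as a share of all phones
smartphone_share = 0.493         # smartphones as a share of phone sales, Q1 2013
tablets_vs_smartphones = tablets_vs_all_phones / smartphone_share
mobile_tablet_share = tablets_vs_smartphones / 2  # ~half of tablets stay at home

print(round(tablets_vs_smartphones, 2))  # ~0.25, i.e. about a quarter
print(round(mobile_tablet_share, 2))     # ~0.13, roughly an eighth
```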
All this means that if you are building a service that is truly mobile, you really want to build your strategy around smartphones first and foremost. As an extension to this conclusion I would point out that iPad shipments are behind desktop/notebook shipments, so even when targeting them you may well be better off simply letting those users use your conventional desktop web app/site until you establish a clear need.
Obviously these conclusions apply more to some sectors than others and ‘second screen’ apps that are counterparts to tv shows, highly design focussed or games will have a very specific set of priorities overlaying any based on device shipments.
Outsourcing transactional and marketing email delivery and SMTP as a service – a strategic review of SES, Pure360, Mailchimp, Mailgun and Mandrill
There are two categories of emails sent out by most organisations:
- Marketing – sent to a segment of the organisation’s mailing list to promote a service or product (possibly third party)
- Transactional – sent to an individual user regarding a transaction they are involved in, e.g. changing a password, adding content, interaction from another user
Pure360 and MailChimp provide excellent turnkey SAAS products for sending marketing emails, aimed at the enterprise and SME sectors respectively. Transactional emails have always been more complicated and are generally managed through internally hosted SMTP servers, frequently installed as an afterthought alongside web servers. As volumes of email increase, deliverability issues become more costly and harder to manage across multiple servers; to this end I am in the process of evaluating Amazon SES (Simple Email Service), Mandrill (recently launched by Mailchimp) and Mailgun (recently acquired by Rackspace).
Based on 1 million emails of 50kB each:
- Amazon SES $105 per month
- Mailgun $407 per month
- Mandrill $200 per month
- SendGrid $800 per month (for this reason I’ve not looked into them any further)
Mailchimp and Pure360 pricing is not directly comparable as it is based on list sizes and tends to be negotiated at this scale.
All three integrate via proprietary RESTful APIs or SMTP with authentication.
The ability to implement any of these services by configuring IIS and Postfix SMTP servers to relay email through them has several advantages:
- very easy to run a pilot
- eliminates downtime during migration since emails will simply queue
- applications are not dependent on synchronous web service calls to send emails but can communicate with collocated SMTP servers
- eliminates vendor lock-in entirely
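Because all of these providers accept authenticated SMTP, the application side of the relay approach stays vendor-neutral – something like the sketch below, where the host, port, credentials and addresses are placeholders you would swap per provider (or point at a collocated relay):

```python
# Vendor-neutral transactional send over authenticated SMTP.
# Host, port, credentials and addresses are placeholders; switching
# provider (or pointing at a local relay) means changing only these.
import smtplib
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    msg = MIMEText(body)
    msg['From'] = sender
    msg['To'] = recipient
    msg['Subject'] = subject
    return msg

def send(msg, host='smtp.example.com', port=587,
         user='smtp-user', password='smtp-pass'):
    # The provider-specific part is confined to these connection details.
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

if __name__ == '__main__':
    m = build_message('noreply@example.com', 'user@example.com',
                      'Password changed',
                      'Your password was changed just now.')
    print(m['Subject'])
```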
Integrating via the RESTful APIs involves an extremely high level of vendor lock-in – technical, due to vendor-specific APIs, and commercial, due to vendor-specific features – but quickly delivers functionality that would otherwise have a massive implementation cost.
This is the point where I realised that I’m comparing apples and oranges.
Mailgun is a broad product that provides an API to manage mailing lists, receive incoming emails, fire RESTful event handlers, track the lifecycle of emails and report.
Mandrill provides an API similar to Mailgun but goes further offering template management via the API and through a web interface that also includes rich pre-built reporting tools.
Pure360 and Mailchimp offer complete email marketing solutions in the cloud with integration options.
Amazon SES is a fire-and-forget SMTP server that lets you do limited aggregate reporting on outcomes (Deliveries, Bounces, Complaints, Rejects) – this is true SMTP as a service, with few frills.
SES, Mailchimp/Pure360 and Mandrill/Mailgun are positioned differently on the cloud services spectrum. SES is Infrastructure as a Service (IAAS), Mandrill/Mailgun are Platform as a Service (PAAS) and Mailchimp/Pure360 are Software as a Service (SAAS) and all three approaches to email delivery have their benefits.
IAAS leaves you with ultimate flexibility and the lowest transactional costs and is suited to:
- legacy apps or very simple new builds due to lack of migration/implementation costs or lock-in when implemented via SMTP
- applications that require extremely complex and bespoke features or workflows
- extremely high volume environments
PAAS has a strong and flexible feature set with higher transactional costs and total vendor lock-in and is suited for:
- complex transactional email based applications
- self branded SAAS email services
SAAS has minimal implementation costs (a simple two way sync), marginal costs only slightly higher than PAAS, minimal vendor lock-in since you can always integrate with additional providers, a strong feature set but minimal flexibility so is a good fit for:
- quick, low cost implementations and pilots to prove markets
- offering clients integration with a third party service to provide high quality email marketing facilities with limited options for direct monetization for a negligible cost
- providing in-house email marketing solutions
The PAAS option here has quite a narrow range of applications, and I would be very worried about either reinventing the wheel and regretting not going SAAS, or lacking flexibility and incurring high costs and regretting not going IAAS.
If anyone has any recommendations of other providers please get in touch.
At FMG we have a requirement to store a very large number of files (~1 million) with reasonable turnover (~1,000 per day) and have them available to third parties. Initially the lead developer used Rackspace Cloudfiles but found it was too slow, so we reverted to storing them on disk – not a long-term approach due to costs. It’s now time for me to make a decision, and in order to do so I’ve written a quick console app to test the Windows Azure Storage, S3 and Cloudfiles .NET SDKs.
The app works by authenticating and then using Parallel.For to upload a 7kB file repeatedly, measuring in ten-second chunks how many times it has been uploaded; I then repeated this test (on another day, at a different time) with a 700kB file.
To double check the result for Rackspace I wrote a quick Python script against the Pyrax library and had pretty much identical results.
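Both test apps share the same shape, which is easy to reproduce in any language – roughly the harness below, a Python sketch in which the upload callable, thread count and window length are whatever you are measuring (this is not the original test code):

```python
# Generic version of the benchmark loop: hammer an upload callable from
# several threads for a fixed window and report completed calls/second.
# The callable, thread count and window are up to the caller; this is a
# sketch of the method, not the original test app.
import threading
import time

def throughput(upload, threads=8, window=10.0):
    deadline = time.monotonic() + window
    count = 0
    lock = threading.Lock()

    def worker():
        nonlocal count
        while time.monotonic() < deadline:
            upload()  # e.g. a 7kB PUT against the SDK under test
            with lock:
                count += 1

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return count / window

if __name__ == '__main__':
    # dummy upload so the harness itself can be sanity checked
    rate = throughput(lambda: time.sleep(0.01), threads=4, window=0.5)
    print(round(rate))
```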
The results are striking:
| Provider | Average Files Per Second (7kB) | Average Files Per Second (700kB) |
| --- | --- | --- |
For small files S3 is very fast, Azure is not as quick but still fast enough for the application I have in mind, and Rackspace Cloudfiles is truly dire. For larger files the spread is much smaller, but S3 is still the fastest, coming in 30% faster than Azure and 100% faster than Cloudfiles.
I am well aware that these SDKs are simply a front for fairly simple RESTful webservices, and it’s quite possible that there are threading issues in the Openstack and Pyrax SDKs that could be fixed, or that I could implement my own library, but I don’t have the inclination or time to do so when Amazon and Microsoft are providing me with something I can use out of the box.
The libraries I used are Openstack.net and AWS SDK for .NET.
My .net test application is available at andrewhancox/cloudspeedtest on Github and the less comprehensive Python application is on GitHub Gist, please get in touch if you see anything I’ve missed.
Raw Results 7kB file (files per second):
Raw Results 700kB file (files per second):
Recent events at my workplace have made me look at the hosting market afresh and I have come to some conclusions that are new to me.
When you purchase hosting, what are you really buying? The online service market has been split into infrastructure as a service (IAAS), platform as a service (PAAS) and software as a service (SAAS), and this stratification is happening in hosting as much as anywhere else across IAAS and PAAS, but I believe there is an additional layer beneath this that is best avoided.
When you buy managed hosting you are paying for a lot of things that you probably don’t care about:
A specific, supported hardware platform – My application requires a given amount of memory, disk space, IO throughput and CPU, but beyond that I have no opinions; a perfect example of this is a ticket that just came to me warning there is an issue with my SAN switch. When I buy servers, storage and so on, I am paying for parts to be stocked to meet an SLA and for troubleshooting expertise when things go wrong. If I go down the public cloud route I benefit from an economy of scale that makes these costs negligible and removes choices I don’t want to have to make.
A specific, supported, instance of an operating system – How long does your software take to deploy? If you don’t know this then you need to do a rehearsal. If you get an operating system level issue on an application node and it takes under an hour to build a new one then it’s not worth troubleshooting, just spin up a new one. As for relying on restores of virtual machines or (god forbid) bare metal OS backups, these take time to perform and can fail at many points in interesting and commercially devastating ways. Managed hosting providers make a huge amount of money from your terror of going out of your depth with your operating system without providing a solution any better than having a tested set of build scripts and someone who can quickly provision VMs.
Network infrastructure – Load balancers, firewalls and the like can support a huge amount of throughput; I have worked with environments that doubled in scale without requiring upgraded infrastructure, showing they were over-specified in the first instance. Public cloud providers either write this cost off as negligible at their scale or have more granular pricing levels than £xk for a Cisco xxx, as well as taking complicated decision making off your hands.
Customer service – With a managed hosting environment you need a very responsive provider who can act on requests to open ports, provision new servers etc. quickly and accurately. With the management tools that public cloud providers offer, however, you can typically make the configuration change yourself in the time it would take to raise a ticket for someone to do it for you, leaving aside the time it takes for that ticket to get picked up. On top of this you are paying for someone to have the knowledge to telnet to a firewall and type commands, rather than enjoying the simple self-service tools that public cloud providers’ economies of scale allow them to deploy.
This line of thought has left me believing that the traditional hosting providers can be thought of as selling expertise, choice and service itself as a service and I don’t believe it’s efficient or strategically desirable to be buying services at this layer, particularly when the public cloud has made them largely irrelevant.
A very interesting recent development in this market is Azure launching support plans for their public cloud (http://www.windowsazure.com/en-us/support/plans/), which repositions their offering as one that can be deployed by an organisation whose hosting team would, in the past, have been dependent on ‘Fanatical Support’ from a traditional provider.
It will be much more efficient to buy into support at this level to supplement an internal team, since the solution is predominantly self-service and you will never request support on hardware or hardware/OS interaction issues. This efficiency is clearly reflected in pricing: the models I have run show Azure plus support to be substantially cheaper than traditional hosting. (I can’t share these figures due to confidentiality agreements.)
I needed to add a group to a role within every project on a Jira instance and, since there is no built-in way to do this, I wrote a quick Python script. To run it you will need Python installed – use MacPorts to do this – and jira-python, which you install with the following two commands:
sudo easy_install pip
sudo pip install jira-python
I’m no python expert so the script might not be particularly idiomatic.
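The script boils down to one REST call per project; jira-python wraps Jira’s REST API, and the underlying request looks roughly like the stdlib-only sketch below. The base URL, credentials, role id and group name are placeholders, and I’d treat the exact endpoint shape as an assumption to verify against the Jira REST documentation.

```python
# Stdlib sketch of the per-project call the script makes: POST the
# group onto the project role. Base URL, credentials, role id and
# group name are placeholders - check the endpoint against your Jira.
import base64
import json
import urllib.request

def auth_headers(user, password):
    token = base64.b64encode(f'{user}:{password}'.encode()).decode()
    return {'Authorization': 'Basic ' + token,
            'Content-Type': 'application/json'}

def role_url(base, project_key, role_id):
    """Endpoint that adds actors to a project role (assumed shape)."""
    return f'{base}/rest/api/2/project/{project_key}/role/{role_id}'

def add_group_to_role(base, headers, project_key, role_id, group):
    req = urllib.request.Request(
        role_url(base, project_key, role_id),
        data=json.dumps({'group': [group]}).encode(),
        headers=headers, method='POST')
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == '__main__':
    # dry run: just show the request that would be made for one project
    print(role_url('https://jira.example.com', 'PROJ', 10002))
```

The real script simply loops this over every project key returned by `jira.projects()` (or `GET /rest/api/2/project`).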
Install wget if you don’t have it.
yum install wget
We need to install EPEL and REMI, package repositories that carry a wider variety of more up-to-date software than CentOS is prepared to support.
Download the RPM files that will install the packages:
Use yum to install them:
sudo rpm -Uvh remi-release-*.rpm epel-release-*.rpm
We now need to alter the yum config for the remi repo to enable it (EPEL is enabled by default):
sudo vim /etc/yum.repos.d/remi.repo
Within the [remi] section set enabled to 1
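After the edit, the section should look something along these lines (the other lines in the file are left as shipped; this is shown only to make the change unambiguous):

```ini
[remi]
# ...name, mirrorlist and gpgkey lines unchanged...
enabled=1
```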
Get the latest erlang repo definition and put it with our other yum repos:
wget -O /etc/yum.repos.d/epel-erlang.repo http://repos.fedorapeople.org/repos/peter/erlang/epel-erlang.repo
yum install erlang
Add the rabbitmq repo:
rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
Download and install RabbitMQ:
yum install rabbitmq-server-3.0.4-1.noarch.rpm
Set rabbit to run on boot:
chkconfig rabbitmq-server on
Set the IP address to bind to – I’ve got multiple NICs and I only want it to listen on one. Add the following to /etc/rabbitmq/rabbitmq-env.conf:
RABBITMQ_NODE_IP_ADDRESS=192.168.56.30
service rabbitmq-server start
Follow instructions supplied at http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/
Set a specific IP to bind to by adding a line to /etc/mongod.conf:
bind_ip = 192.168.56.30
Remember to restart the service (service mongod restart)
Open the port in IPTables:
Add the following lines before the blanket REJECT statement to allow ports 5672 (RabbitMQ) and 27017 (MongoDB) on eth1:
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp --dport 5672 -j ACCEPT
-A INPUT -i eth1 -m state --state NEW -m tcp -p tcp --dport 27017 -j ACCEPT
Instructions derived from http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-networkscripts-interfaces.html
We’ll configure two interfaces: one for public traffic and one to communicate with the host. This gives you a known IP the host can use to reach the VM for SSH, web traffic and so on, while still allowing the host to use DHCP on its own NIC.
If you don’t already have a host-only network set up in VirtualBox, go to VirtualBox->Preferences, Network tab, and create one, then edit its settings. The Adapter tab holds the network settings for the host – the virtual NIC your computer will use to participate in this network – so give it a memorable IP address and set the subnet mask. Ignore the DHCP tab, as we’ll set static IPs for the VMs.
On the VM configuration give it two network adapters, one NAT, one host only – pick the network you created earlier.
Edit /etc/sysconfig/network-scripts/ifcfg-eth0 and change the ONBOOT line to yes
Create a script for the second NIC by copying the first one
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
Edit /etc/sysconfig/network-scripts/ifcfg-eth1 change:
BOOTPROTO to static
Add IPADDR and NETMASK settings with values that fit with your host network
Change DEVICE to eth1
Set HWADDR to the MAC address listed in the VM network config for the second adapter.
service network restart (to pick up the new settings)
Edit the hosts file on the host and vm machines (vim /etc/hosts) and add an entry for the VM with the DNS name you’ll be using
yum install openssh-server
set SSH to start on boot: chkconfig sshd on
service sshd start
You should now be able to ssh from the host to the vm
By default CentOS has iptables installed and running with only SSH open, so open port 80:
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
/sbin/service iptables save
service iptables restart
Once you’ve done this once, I’d advise you to clone from this VM in future, changing the MAC and IP addresses, to save yourself time.