Software runs my life

Month: September 2008

Top 5 CRM Selection Criteria

I have now been working with CRM systems for 5 years, and it is only recently that I have seen the industry (finally) mature to the stage where it is no longer engaged in a straight-up feature war. This has been driven by two things: the maturity of product offerings, and a recognition by customers that they should be making decisions based on an analysis of their own requirements rather than on a feature comparison matrix. To that end, here are my top 5 criteria for selecting a CRM system:

Usability – Without this, nothing else matters. If your users will not adopt and use your selection, it’s a waste of time and effort.

Alignment – What do you want to do with your CRM system? If you are looking to manage contacts & contact activity, you’d consider a completely different slate of products than you would if you were looking to customize a product to support your entire business process.

Product delivery – SaaS vs. client/server is a big consideration. Do you need an offline client, or is a plugin enough? If you need one, how robust does it have to be? That could direct you toward a client/server solution. Do you have an IT department and in-house expertise? If not, that could direct you toward a SaaS product.

Integration needs – While it is easier than ever to integrate SaaS products with other systems, some scenarios definitely call for an on-premise solution, for example when a software package you rely on offers no interface to the outside world.

Pricing – Do you have capital up-front? Do you want to buy your solution outright? If not, SaaS products are much easier to get started with, though in some cases they can end up costing more in the long run. Prices also vary greatly between SaaS products, and even within a single product depending on the functionality tier you choose.
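To illustrate the long-run point with made-up numbers: a SaaS product at $50 per user per month for 20 users is $12,000 a year, or $60,000 over five years, while a $30,000 up-front licence with 18% annual maintenance comes to about $57,000 over the same period. The figures are invented, but the crossover is real; the deciding factor is usually the IT overhead you have to add to the on-premise side.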

Australian Bank User Interfaces

I am really focusing on user interfaces at the moment, both because of requirements at work and out of personal interest. It is one of those areas where the computer science ends and the people science begins. It is very rare to come across a good user interface; in fact, UIs have become objects of ridicule in many cases. One of the defences IT people fall back on is “my software is simply that complicated”: it needs the fields, options, checkboxes and so on, otherwise you are losing functionality. I disagree; you just need to understand your customer better. Have you benchmarked and user tested? There are some good podcasts and consultants out there who hammer home the importance of user testing (and I don’t mean using UAT to check for bugs!).

Getting to my point, I was recently looking at changing banks. One of my biggest concerns (after interest rates) was the user interface and capabilities of each bank’s online banking system. These days banks provide a lot of functionality online, so it is very important to me that I get a functional yet no-nonsense interface. Thankfully most banks provide some kind of test drive, but these really don’t provide detailed enough coverage.

Australian Bank Comparison Matrix

Fortunately, PC Authority magazine has done a user interface and basic functionality/security comparison of all the major Australian banks. I have included a copy of their comparison matrix to the left. The winner was the NAB, followed by the Bank of Queensland (who prove that a top internet offering is about the quality of the spend rather than the quantity). Some more informal user feedback suggests that users actually care more about the interface than about the security of their online banking; phishing and viruses regularly make the quality of website security a moot point.

These days the internet banking site is often the sole point of interaction a customer will have with their bank for months at a time. Banks should understand this and give their user interfaces a higher priority. What is the cost comparison between training customer care staff and a decent usability review? I would argue that the usability review delivers a much better ROI.

Islands of Computing Power

Amit Mital kicked off TechEd Australia 2008 today with a keynote presentation on Microsoft’s view of how software and services will develop in the future, particularly in relation to their new Live Mesh offering. There is a good summary of his presentation on the TechEd New Zealand site; it seems they got an identical opening keynote. For someone who loves networks, he sure doesn’t seem to like professional networks!

There was one line of reasoning in his speech that struck me. Moore’s law is still holding true: computer hardware is continuing to double in processing power every 18 months, and that computing power is appearing in more and more locations. But when was the last time your network doubled in speed? What about doubling in speed to each additional node? This growing imbalance between processing power and connectivity has meant two things that are obvious even today:

  1. Computers are islands of computing power – There is no seamless transfer of data between your devices. You work on a file at work, email it home, download it at home, work on it and send it back.
  2. Deploying local machines is too hard – Each branch office needs a rack, servers, backup, redundancy, configuration, support, licensing…

Behind the Mesh Slide

Microsoft’s solution at a high level is the Mesh stack, the structure of which can be seen in the slide shown here. The fundamentals are that local software is fast and hosted services are convenient, so let’s tie them together with an API and get the best of both worlds. The trick is getting the balance right: where does a local application end and the service begin? How do you split the business logic? How do you provide offline access and quick sign-on to new devices? Hmmm…
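To make the “best of both worlds” idea concrete, here is a minimal sketch in plain JavaScript of one way the split could work (every name here is hypothetical, not the actual Mesh API): reads hit a local cache first for speed and fall back to the hosted service, while writes land locally straight away and sync to the service in the background.

    // Hypothetical stand-ins for whatever hosted API ends up winning.
    function fetchFromService(key, callback) { callback(null); }  // e.g. an HTTP GET
    function pushToService(change) { }                            // e.g. an HTTP POST

    var store = {
      cache: {},    // local data: fast, and available offline
      pending: [],  // writes queued for the hosted service

      read: function (key, callback) {
        if (key in this.cache) {
          callback(this.cache[key]);   // local hit: instant
        } else {
          var self = this;
          fetchFromService(key, function (value) {
            self.cache[key] = value;   // remember it locally for next time
            callback(value);
          });
        }
      },

      write: function (key, value) {
        this.cache[key] = value;                        // local write is immediate
        this.pending.push({ key: key, value: value });  // queue it for the service
        this.flush();
      },

      flush: function () {
        // Sync whenever the network allows.
        while (navigator.onLine && this.pending.length > 0) {
          pushToService(this.pending.shift());
        }
      }
    };

All of the open questions above live inside this little object: how big the cache is allowed to grow, who wins when the cache and the service disagree, and how a brand-new device bootstraps its copy.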

Microsoft’s current practical solution is to rewrite most of its server packages to allow hosted delivery; Hosted Exchange is the obvious flagship for this. Google have taken a different approach: they believe that all you should need on your desktop is Chrome, essentially an all-purpose thin client rather than a thick client on a drip feed.

So who is right? Well, I am betting things will converge on a middle-of-the-road approach. Implementing with current technology, I would say that JavaScript, a web browser and some sort of XML interface would be the best way to go (there is a rough sketch of what I mean after the list below). A few things need to develop from here:

  1. APIs need to be standardised and built into the browser (or the OS, as the two merge). Something like JavaScript libraries, but compiled, lightning fast and highly reusable. Chrome is getting there.
  2. Data transfer needs to be better than XML. Think highly compressed and encrypted on the fly, but quickly decoded into a human-readable format when necessary. Microsoft’s MeshFX is getting there because it has authentication and other services built in, but it needs to be open like SOAP.
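As promised above, here is a rough sketch of the JavaScript-plus-XML approach as it works in browsers today. The endpoint URL and the XML element names are invented for illustration; this is a sketch, not anyone’s actual API.

    // Fetch a contact record as XML from a (hypothetical) service endpoint
    // and hand a plain object to the caller.
    function loadContact(id, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/contacts/" + id, true);  // hypothetical endpoint
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          var doc = xhr.responseXML;  // the browser parses the XML payload into a DOM
          callback({
            name:  doc.getElementsByTagName("name")[0].firstChild.nodeValue,
            email: doc.getElementsByTagName("email")[0].firstChild.nodeValue
          });
        }
      };
      xhr.send(null);
    }

    loadContact(42, function (contact) {
      alert(contact.name + " <" + contact.email + ">");
    });

It works, but it also shows why both points above matter: every site re-ships plumbing like this in script form, and the XML on the wire is far more verbose than a compact binary format would be.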

So I guess the race is on! Google will take JavaScript to its limits, and Microsoft will try to blow us away with its feature set. When will they sit down and standardise on the next generation of JavaScript and data formats?
