Imagine a nirvana where software lifecycle integration just works. A place where an intricate ecosystem of best-in-class tools for software development and delivery runs seamlessly and its users benefit greatly from the steady flow of real-time information. Despite being a constant hub of activity, it’s also a place of calm — a Zen environment for everyone involved in the toolchain.
Every team — from testers to developers to PMOs to business analysts and PPMs — is in sync. Thanks to the end-to-end integrated workflow, everyone in the value chain has the visibility and traceability required to work on the project to the best of their abilities. Productivity is optimized and IT initiatives drive the organization forward, helping it to consistently deliver high-quality products and services to its customers.
At the heart of this nirvana are APIs. In this fantasy, APIs provide developers with all the essential information they need to make two endpoints connect. They possess this information because the vendors built their respective tools with integration in mind, making sure to include detailed documentation that helps external developers work with each tool's API.
If only this nirvana existed. The reality is integration is one of the hardest technical processes that an organization can face. It’s an all-encompassing job and APIs have a starring role that significantly influences the outcome.
Now, using a tool’s APIs is the best and most stable way to access the information stored in the tool’s underlying database. APIs facilitate access to the endpoint’s capabilities and the artifacts that they manage, and they can also enforce business logic that prevents third parties from unauthorized activities.
However, while APIs are a critical piece of the integration puzzle, they also highlight the delicate intricacies involved in the integration process. Many of these APIs were actually created for the vendor's convenience in building a tiered architecture, not for third-party integration. They were not made with an external consumer in mind; integration was an afterthought, if you will.
As a result, these APIs are often poorly documented and incomplete:
- Model objects don’t necessarily work correctly together
- Data structures, how and when calls can be made and the side effects of operations are all often excluded from the documentation
- Poor error handling and error messages are common
- Edge cases and bugs are rarely documented
- Some APIs aren't fully tested e.g. some tools may return success even when all changes aren't made
- Some APIs have unexpected side effects/behavior e.g. updates that result in delays for changes to appear
- Some APIs have inconsistencies between versions e.g. different vendor endpoints to retrieve tags
Because these issues aren't documented, figuring out how to handle them requires a great deal of trial and error. And sadly, the vendor's customer support staff is often unaware of many of these issues, and of how to use their own API, so finding a resolution often requires access to the endpoint vendor's development teams.
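Defensive coding can blunt some of these defects. The sketch below shows one common workaround for the "returns success even when the changes aren't made" and "updates appear after a delay" cases: after writing to a tracker's API, re-read the artifact and poll until the change is actually visible. The `put`/`get` callables and field names are assumptions for illustration, not any specific vendor's API:

```python
import time
from typing import Callable, Dict

def update_and_verify(
    put: Callable[[int, Dict], int],   # issues the update, returns HTTP status
    get: Callable[[int], Dict],        # re-reads the artifact's current fields
    artifact_id: int,
    fields: Dict,
    attempts: int = 5,
    delay: float = 0.0,
) -> bool:
    """Write an update, then poll until it is actually visible.

    A 2xx status from a poorly tested API is not proof the change landed,
    and some tools apply updates asynchronously, so we re-read and compare.
    """
    status = put(artifact_id, fields)
    if not 200 <= status < 300:
        raise RuntimeError(f"update of artifact {artifact_id} failed: HTTP {status}")
    for _ in range(attempts):
        current = get(artifact_id)
        if all(current.get(k) == v for k, v in fields.items()):
            return True    # change is visible; the update really happened
        time.sleep(delay)  # back off before re-checking
    return False           # update was silently dropped or is still propagating
```

Injecting `put` and `get` as callables keeps the verification logic independent of any one vendor's client library, which matters when the same pattern has to be repeated across many endpoints.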
So what does this all mean exactly? Consider a kitchen for a second: the pantry is full of ingredients (APIs) to make a recipe (the formula for the integration), but without correct labelling (documentation of the APIs), we have no idea what they are, when they expire, or how best to use them. Any attempt at cooking an integration will likely end in disaster.
What’s worse, these APIs can change as the endpoint vendors upgrade their tools. Depending on how thoroughly the vendor tests, documents and notifies users of API changes, these changes can break the carefully crafted integrations. For SaaS and on-demand applications, these upgrades happen frequently and sometimes fairly silently.
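One pragmatic hedge against version-to-version inconsistencies is to avoid hard-coding a single endpoint and instead key endpoint paths on the detected API version, failing loudly when a new, unanalyzed version appears. A minimal sketch, with entirely hypothetical paths:

```python
from typing import Dict

# Hypothetical example: the endpoint that retrieves tags moved between
# major releases, so the connector keys its paths on the detected version.
TAG_ENDPOINTS: Dict[str, str] = {
    "1.x": "/api/tags",
    "2.x": "/api/v2/labels",  # same data, renamed after an upgrade
}

def tag_endpoint_for(version: str) -> str:
    """Resolve the tag-retrieval path for a detected API version,
    refusing to guess at an endpoint the integration has never seen."""
    major = version.split(".")[0] + ".x"
    try:
        return TAG_ENDPOINTS[major]
    except KeyError:
        raise RuntimeError(
            f"Unsupported API version {version}; "
            "re-run the compatibility analysis before syncing"
        ) from None
```

The explicit failure is the point: a silent vendor upgrade then surfaces as a clear error at the integration boundary instead of as corrupted data downstream.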
So any API-based connection is little more than glue holding together two systems — a temporary and unreliable measure. There's no maintenance or intelligence built into the tool to ensure the systems keep working together. In a software world that faces a relentless barrage of planned and unplanned technical changes and issues, such a brittle integration is unacceptable. Your software development team will suffer, your overheads will grow, and the value you deliver will shrink.
With that in mind, we need to find a way to label the APIs and gain a better understanding of how to use them collectively to create first-class integrations. The first step is always to do an exhaustive technical analysis of the tool:
- How is the tool used in practice?
- What are the object models that represent the artifacts, projects, users and workflows inherent in the tool?
- What are the standard artifacts and attributes, and how do we (quickly and easily) handle continual customizations such as additions and changes to the standard objects?
- How do we link artifacts, create children and track time?
- Are there restrictions on the transitions of the status of an artifact?
- How do we use the APIs?
This analysis can be very time-consuming, especially when you factor in poor documentation and design flaws (in the context of integration). And what at first appear to be pretty simple tasks actually turn out to be surprisingly hard. For instance, ServiceNow has 26 individual permissions to work through — no quick or easy endeavor. The results of any analysis should reveal the knowledge gaps and highlight how the lack of information limits both the feasibility and the quality of the integration.
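As one small, concrete example of turning that analysis into running code: the customization question above (continual additions and changes to the standard objects) can be guarded with a schema-drift check that compares the field set recorded during the analysis against what the tool's metadata endpoint reports now. The field names here are invented for illustration:

```python
from typing import Dict, Set

def detect_schema_drift(known_fields: Set[str], live_fields: Set[str]) -> Dict[str, Set[str]]:
    """Compare the field set recorded during the last analysis against what
    the tool's metadata endpoint reports now, so admin customizations
    (added or removed attributes) are caught before they break a sync."""
    return {
        "added": live_fields - known_fields,    # new custom fields to map
        "removed": known_fields - live_fields,  # fields the sync still expects
    }
```

Running a check like this on a schedule turns "an admin quietly added a custom field" from a mysterious sync failure into a routine maintenance task.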
By now, you probably have a fair idea that using APIs to create an integration takes a herculean amount of effort behind the scenes. And trust us, that's only the tip of the iceberg. We've spent over a decade building up an encyclopedic understanding of software lifecycle integration (SLI), and the education never stops.
Fortunately, we’re fully equipped with the right brains, technology and processes to stay at the vanguard of the market, using domain expertise and semantic understanding to create robust large-scale integrations that grow with your software landscape.
We will also be discussing in detail the huge challenges involved in software lifecycle integration tomorrow (Tuesday, January 31st) during our special live streamed event, TasktopLIVE. You can find out more about the event here.