Complex Infrastructure Documentation, Where Do I Start?

Planning to implement new technologies becomes very risky if you do not really understand the potential impact of changing existing systems to accommodate them.

Understanding the complexity of the IT landscape cannot be done by one person, or even by teams of people. Many individuals hold vast amounts of knowledge about specific areas of the infrastructure, but they are usually specialists, and there are so many specialisms within the infrastructure environment that no one person or team can possibly have a global view of everything and how it all links together to provide the business services it supports.

There is simply too much data, and it is a problem that cannot be swept under the carpet. In the 21st century every organisation relies on these core systems, so it is vital to thoroughly and forensically document your infrastructure in order to mitigate risks and be able to control, manage and plan for the continued availability of your business-critical systems.

The data (once obtained) needs to be held in a repository (database), with varying levels of access and permissions to view or administer, so it can be queried and filtered to show precisely what the user needs to see.

This could be data at the physical level, showing the actual equipment: what it is called, where it sits in the data centre, how it is connected, and also its capacity, power consumption and storage.

It can also be a top-down view, or business impact analysis, showing exactly which components are linked, from the top-level business service down through the applications, databases, servers, virtual servers, networks and power that connect and deliver the service.
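The idea of a top-down dependency walk can be sketched in a few lines of code. The service and component names below are purely illustrative, and the simple parent-to-children mapping is an assumption for the sketch, not AssetGen's actual data model:

```python
# Minimal sketch of a business-impact (top-down) dependency walk,
# assuming a simple parent -> children mapping. All names are invented.
dependencies = {
    "Online Banking (service)": ["Payments App", "Web Tier"],
    "Payments App": ["Payments DB"],
    "Web Tier": ["VM-WEB-01"],
    "Payments DB": ["SVR-DB-01"],
    "VM-WEB-01": ["SVR-ESX-01"],
    "SVR-DB-01": ["Rack A03 / PDU-1"],
    "SVR-ESX-01": ["Rack A03 / PDU-1"],
}

def impacted_by(component, deps):
    """Return every element reachable below `component`, i.e. everything it depends on."""
    seen = set()
    stack = [component]
    while stack:
        node = stack.pop()
        for child in deps.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen
```

Starting the walk at the business service returns everything down to the rack power feed, which is exactly the kind of question a change planner needs answered before touching any one component.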

This is made even more complex when systems and data reside in multiple data centres, branch offices, differing geographies and, potentially, cloud-based systems.

The goal is to produce valuable insights into your business operations from trusted sources of data. Being able to find, sort and filter specific data to produce a visualisation or diagram of the hierarchy and dependencies is now entirely possible with AssetGen.

Being able to hyperlink all of the elements in the diagram back to the database, so that the user can drill down to find whatever level of underlying detail is required, is also now possible.


Without good data you still will not be able to really know what shape the infrastructure is in. In reality, most organisations rely upon varied data sources in the form of local knowledge, spreadsheets, Visio diagrams, CMDBs, DCIM and discovery tools, but typically there are gaps, holes and inconsistencies which make the data hard to trust.

Spreadsheets, still the most widely used form of documentation, are labour intensive, difficult to interpret, easily forgotten, and rarely version controlled, updated or checked.

If you have even a modest estate, you can quickly end up with hundreds, indeed thousands, of spreadsheets and diagrams, all of which get out of date extremely quickly. Think of all the separate spreadsheets, and tabs within spreadsheets, needed just to show connectivity paths. Think of the filters you probably need to set within each spreadsheet to find specific objects that have perhaps been named slightly differently. It simply does not scale. CMDB tools, though useful, are also limited in our view, as they do not have the depth of detail required for IT infrastructure planning and capacity management.

Diagrams, whether CAD or Visio, are specialised, labour intensive, and difficult to keep up to date and automate. Discovery tools can be useful, but they will not find equipment which is switched off, and they cannot record locations and logical names, U positions in racks, or the tile or suite reference inside a data centre. Nor can they record important passive equipment, like blank plates and cable management, which will impact planning (and costs) when, for example, looking for contiguous space in a cabinet to populate with a new server.
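That contiguous-space question is a concrete planning problem, and it only has a reliable answer if the occupied positions (including passive items a discovery tool never sees) are recorded. A minimal sketch, assuming a cabinet is modelled as a set of occupied U positions:

```python
# Sketch: find the lowest run of free, contiguous U positions in a cabinet.
# The rack model (a set of occupied U numbers, 1 = bottom) is an assumption
# made for this illustration.
def find_contiguous_space(occupied, needed, rack_height=42):
    """Return the lowest starting U of `needed` free contiguous units, or None."""
    run_start = None
    for u in range(1, rack_height + 1):
        if u in occupied:
            run_start = None  # run broken by occupied (or passive) equipment
        else:
            if run_start is None:
                run_start = u
            if u - run_start + 1 == needed:
                return run_start
    return None

# Occupied positions include passive equipment such as blank plates and
# cable management, which discovery tools cannot report.
occupied_us = {1, 2, 3, 10, 11, 12, 13, 20}
```

If the blank plate at U20 were missing from the records, the function would report a larger free run than physically exists, which is precisely how undocumented passive equipment derails a server installation.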

In our view there is simply no single “silver bullet” solution that combines all the benefits each of these typical approaches to documentation provides individually. So, we encourage our customers to examine all of their sources; once verified, this gives an agreed baseline of data that can be trusted and used.

Once the data has been found, that is a good start. In our experience, however, there is rarely a systematic or logical way of describing these physical and logical elements with an agreed naming convention, which makes it very difficult to find or look up data from existing records. Typically this is no one’s fault; it is just a function of a lack of agreed standardisation of terminology. How you designate and name objects precisely, so that they carry a unique, referenceable and searchable ID, is vitally important.

Databases will only find things in look-up tables if they conform to agreed naming conventions. Without this disciplined and more rigorous approach, the results will not be truly representative or accurate enough to be trusted. The good news, however, is that this somewhat painful exercise only needs to be done once!
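The value of an agreed convention is that conformance can be checked mechanically. The pattern below (site, room, rack, type, sequence) is an invented example for illustration, not a recommended standard:

```python
import re

# Sketch: validate names against an agreed convention so every object
# carries a unique, searchable ID. The SITE-ROOM-RACK-TYPE-NN pattern
# here is a made-up example, not a standard.
NAME_PATTERN = re.compile(
    r"^(?P<site>[A-Z]{3})-(?P<room>DC\d)-R(?P<rack>\d{2})"
    r"-(?P<type>SVR|SWH|PDU)-(?P<seq>\d{2})$"
)

def parse_name(name):
    """Return the name's fields if it conforms, or None if a look-up would miss it."""
    m = NAME_PATTERN.match(name)
    return m.groupdict() if m else None
```

A conforming name such as `LON-DC1-R07-SVR-03` parses into searchable fields, while an ad-hoc label like `london server 3` returns None: exactly the kind of record a database look-up will silently fail to find.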

In other words, the lack of trusted data is the first thing that has to be recognised and acted upon. It may be possible to use some elements of all of the data sources previously described, but in reality this will in all likelihood need to be supplemented with some form of physical verification and audit.

Square Mile Systems offers consultancy to help obtain, rationalise and normalise your data, so it is fit for purpose.

We can help your organisation obtain those insights, significantly reduce operational risk (e.g. from change), and provide valuable guidance and training in best-practice methodologies to help your business understand its infrastructure, gain control, and improve application service delivery.

Essentially, everything I have described in this article is what we have found in practice, in pretty much every customer engagement. So a good starting point is recognising that if things are getting out of control, you are not alone; your problem is not unique, even though every IT estate is different.

There are ways of solving what I initially described as one of IT’s most enduring problems: understanding, recording, documenting and visualising the critical infrastructure that underpins your business.

At Square Mile Systems, our aim is to help our customers solve one of IT’s most enduring and difficult problems. Namely, understanding, recording, documenting and visualising the multiple layers of an organisation’s core IT Infrastructure.

Clearly defining and understanding which components deliver which services, and how the relationships and dependencies of all these elements can impact the business should any changes take place, is an extremely complex task.

Decades of growth, the acquisition of new systems, the imperative to implement the next platform or technology, and the many bespoke applications, not to mention the temporary and potentially undocumented sticking plasters and patches applied in a piecemeal fashion by well-meaning IT individuals make every organisation’s core technology landscape absolutely unique and difficult to conceptualise as a whole entity.

Jonathan Phillips

UK Customer Services

