Microsoft launches Upstream Reference Architecture Initiative
Friday, June 25, 2010
Microsoft has launched what it calls the “Microsoft Upstream Reference Architecture Initiative” together with 20 partners to date – a kind of manual for how to do IT for the upstream.
Imagine if it was possible to buy a manual which would tell you how to put together information technology systems for the upstream, so you could be sure that it works.
This is roughly what Microsoft is putting together, calling it its “Microsoft Upstream Reference Architecture Initiative.”
To date it involves 20 of its partners, including Accenture, EMC, Energistics, ESRI, Honeywell, IHS, Infosys, ISSGroup, Landmark Graphics, Logica, Merrick Systems, Open Spirit, OSIsoft, Petris, PointCross, Schlumberger Information Solutions (SIS), Siemens Energy, VRcontext, WellPoint Systems and Wipro Technologies.
The reference architecture created by Microsoft and further developed by initiative partners won’t tell you exactly how to do it, but gives you broad, tried-and-tested principles to follow.
It is a bit like someone writing a standard manual for building a house (after a few houses had been built and some things had been tried), which might say things like, “it’s a good idea to have the dining room near the kitchen so you don’t have to carry food so far,” says Accenture’s Martin Leach, Chief Architect – Integrated Oilfield Solutions, who is involved in the project.
Such a manual still leaves plenty of diversity in how houses are built to meet people’s different needs, and leaves room for innovation and competition as people develop new ways of doing things. The reference architecture works the same way.
The reference architecture is not limited to just Microsoft products, and does not exclude companies which are in competition with Microsoft – in fact, virtually every information technology service provider, whether they are providing databases, visualisation tools, storage, models or information management could fit into it.
There are no restrictions on what can be added to it. “If we have vendors that would like to implement this architecture and they happen to be on a different platform, there’s nothing to stop them,” says Microsoft’s Technology Strategist, Worldwide Oil & Gas Industries, Paul Nguyen.
Microsoft is currently developing processes for how the system will be governed and evolved. The leaders of the project are Microsoft’s Paul Nguyen and Ali Ferling, Managing Director, Worldwide Oil & Gas Industries. “More detail will be revealed in the next few months,” Mr Nguyen says.
There are plenty of clear advantages to having a system like this.
It is very useful for the whole industry to develop standard ways of doing their IT, rather than developing new systems from scratch in every company – just like we all have fairly standard ways of plumbing our houses. This makes it much easier for employees and service providers to work for different companies, because they can understand much more quickly how it all works.
Companies will be able to implement new technology with a higher degree of confidence that it will work, knowing that they are implementing systems which have all been fully tested and should work with the set-up they already have. This means that people can spend their time on the more value-adding work, such as actually optimising their production and safety.
It should make it easier for innovators to develop new tools, such as for analytics, collaboration, complex event processing, data integration, connecting devices, data storage – even entire business processes – because they know there is already a big market of companies who are ready to install the system.
Companies could compete to design and sell “processes” which run on it. For example, a company could design a process for people to work together in exploration, so the software supports robust discussion between geologists, letting them all suggest alternative views of what the seismic data might mean and choose the best between them, with the possibility of inviting anyone else into the discussion.
Microsoft’s Ali Ferling sees this a bit like the way automotive companies design their cars on 'platforms', for which a variety of different manufacturers can compete to make the best, highly standardised components. Different car models are then created on top of this platform, fulfilling different special requirements. Microsoft sees its role in a similar way: providing the best platform for IT, with Microsoft's industry partners then creating highly specialised oil & gas applications on top of this platform.
The system is planned to cover all upstream operations and perhaps extend later into downstream, although it is unlikely that many companies will want to implement it all at once.
Production operations is seen as the most critical area where a system like this could help – when there are complex daily decisions to be made which rely on a large amount of information.
Although all oil companies are different and do things in different ways, there are some things they all do – such as owning and managing subsurface assets, and monitoring what is coming through the wells. So systems can be developed to do these standard tasks, such as automating how a well test works.
“Every oil and gas producer has a common core set of workflows such as well test validation and production optimization,” says Michael Szatny, Landmark’s product manager for DecisionSpace® for Production™. “The Microsoft upstream reference architecture recognizes similar principles as those in Landmark’s commercially-available IPO solution which enables companies to use their disparate data and preferred software applications in production workflows.”
What it covers
Vertically, the Microsoft Upstream Reference Architecture has 5 layers – (i) data sources; (ii) software for specific disciplines (g+g, drilling+completions, production operations, data integration, back office ERP/CRM); (iii) data integration layer; (iv) business process management / workflow layer; and (v) visualisation / presentation layer.
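The five-layer stacking could be sketched in code along the following lines. This is purely illustrative: the reference architecture does not define programming interfaces, and these names and responsibility lists are taken from the article rather than any published specification.

```python
from dataclasses import dataclass

# Illustrative sketch only - layer names and responsibilities paraphrase the
# article's description of the Microsoft Upstream Reference Architecture.

@dataclass
class Layer:
    name: str
    responsibilities: list

ARCHITECTURE = [
    Layer("data sources",
          ["SQL databases", "Oracle databases", "PPDM data stores"]),
    Layer("discipline software",
          ["g+g", "drilling+completions", "production operations",
           "back office ERP/CRM"]),
    Layer("data integration",
          ["gather structured and unstructured data into one system"]),
    Layer("business process management / orchestration",
          ["coordinate workflows, keep data and models consistent"]),
    Layer("visualisation / presentation",
          ["portals, office, mobile and remote clients"]),
]

# Print the stack from bottom (data) to top (presentation).
for i, layer in enumerate(ARCHITECTURE, 1):
    print(f"({i}) {layer.name}: {'; '.join(layer.responsibilities)}")
```

Each layer only talks to the ones adjacent to it, which is what lets vendors swap in their own products at any level.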
Any data source could be used for the system – SQL databases, Oracle databases, PPDM data stores.
A data model, such as the PPDM data model, is only a component of the reference architecture – it is a plan for how data can be managed and integrated.
The data integration layer (iii) gathers all data sources into a single system, so it can all be worked on together.
It can cover both structured data (such as data held in specific software packages, such as reservoir modelling and surveying) and unstructured data (such as emails, documents and spreadsheets).
The role of the business process management / workflow layer (iv) (which Microsoft also calls the ‘orchestration’ layer) is partly to manage the data itself.
This layer can ensure that the data is all co-ordinated and accurate – addressing a common problem for upstream software systems, where there are large amounts of data but it is too unstructured or inaccurate to be of much use.
It also co-ordinates the various models the company is running. For example, if an economic model is built on the results of a reservoir model, and the reservoir model is changed, the economic model can automatically update.
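The automatic-update idea can be sketched as a simple dependency graph: when an upstream model re-runs, anything built on its results re-runs too. The class and model names here are invented for illustration; the numbers are made up.

```python
# Minimal sketch of the orchestration layer's model co-ordination: when an
# upstream model's results change, dependent models re-run automatically.
# All names and figures are hypothetical.

class Model:
    def __init__(self, name, compute, depends_on=()):
        self.name = name
        self.compute = compute          # function: dict of inputs -> result
        self.depends_on = list(depends_on)
        self.dependents = []
        self.result = None
        for dep in self.depends_on:     # register with upstream models
            dep.dependents.append(self)

    def run(self):
        inputs = {dep.name: dep.result for dep in self.depends_on}
        self.result = self.compute(inputs)
        for dep in self.dependents:     # propagate the change downstream
            dep.run()

# Hypothetical example: an economic model built on a reservoir model.
reservoir = Model("reservoir", lambda _: {"recoverable_mmbbl": 120})
economics = Model(
    "economics",
    lambda inp: inp["reservoir"]["recoverable_mmbbl"] * 60e6,  # $60/bbl, say
    depends_on=[reservoir],
)

reservoir.run()   # re-running the reservoir model updates economics too
print(economics.result)
```

A real orchestration layer would add scheduling, audit trails and human approval steps, but the dependency propagation is the core of it.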
The visualisation layer (v) is how people actually work with the data – whether in their offices, working remotely via smart phones, or working at home. Also some people who work with the system might not be direct employees of the company.
As part of the visualisation layer, there is a core "integrated portal" where everybody can find all kinds of information, for geoscientists, engineers and managers. Once people have logged on they can see whatever they want.
Going across the company, it covers everything upstream – exploration, drilling, production and financial management.
It can incorporate high performance computing systems, via a cluster server. The architecture can use XML standards, such as WITSML and PRODML. It can include service-oriented architecture, cloud computing, and social media.
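The benefit of XML standards like WITSML and PRODML is that any tool in the stack can read well and production data without a bespoke adapter. The fragment below only gestures at the idea: real WITSML/PRODML documents follow namespaced schemas published by Energistics, and these element names are simplified inventions, not the actual schema.

```python
import xml.etree.ElementTree as ET

# Simplified, invented element names - NOT the real WITSML/PRODML schemas,
# which are namespaced and far richer. This just shows the interchange idea.
doc = """
<productionReport>
  <well uid="W-001">
    <oilRate uom="bbl/d">1250</oilRate>
    <waterCut uom="pct">12.5</waterCut>
  </well>
</productionReport>
"""

root = ET.fromstring(doc)
for well in root.findall("well"):
    rate = float(well.findtext("oilRate"))
    print(well.get("uid"), rate)
```

Because the document structure is agreed in advance, the consumer needs no knowledge of which vendor's system produced it.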
With an IT architecture like this, it should be much easier to put together workflows to help answer complex questions, like how to develop a field, which involve people with different areas of expertise, and different data sources, all coming together to achieve the best answer.
For example this task might draw on information about the reservoir, information about production from different wells, information about the costs of drilling new wells and possible targets.
When staff are scheduling how to use their rigs, they can see all current drilling opportunities on one half of the screen, and information about rig availability on the other half of the screen, so they can match them together.
What we have today
Today, it is common for oil companies to have fairly evolved data management systems within individual departments, such as managing facilities, reservoir, wells and overall operations, but they do not work together well.
Different departments have their own analytic models, but the models are not connected together.
There have been many efforts to connect together systems from different departments, but normally it is on a one to one basis (a lot of effort is put into connecting one system to another system), known as a “point to point” integration.
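The scaling problem with point-to-point integration is easy to quantify: connecting every system to every other needs a number of links that grows quadratically, while connecting each system once to a shared integration layer grows linearly. A back-of-envelope sketch:

```python
# Why point-to-point integration scales badly: n systems need n*(n-1)/2
# pairwise links, versus n adapters to a shared integration layer (hub).

def point_to_point_links(n: int) -> int:
    return n * (n - 1) // 2

def hub_adapters(n: int) -> int:
    return n

for n in (5, 10, 20):
    print(f"{n} systems: {point_to_point_links(n)} pairwise links "
          f"vs {hub_adapters(n)} hub adapters")
```

With 20 departmental systems, that is 190 custom integrations to build and maintain, against 20 adapters to a common layer.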
Collaboration is also difficult because it is hard to enable people working inside and outside the company to see all the necessary data at once. For example, a seismic service company working for several oil companies needs a separate login for each client’s system.
People agree on the need for key performance indicators to assess how well things are going, but the data to calculate them is often not readily available, and when data is available it is hard to determine how current it is.
How to do it
Of course every company already has an IT architecture of some sort (so they’re not building one from scratch), but they can use this as a blueprint as they develop what they are doing, and compare this system to what they have already.
The system can be implemented in different modules, so companies can implement whichever bits of it they want.
To build it, Microsoft suggests starting with one domain process, such as doing a well review, and putting in the infrastructure, connectivity and processes to do that.
You can make it your general plan to move towards it over a number of years, or decide you are going to revamp a portion of your IT systems according to the reference architecture.
Landmark markets its own software platform, “DecisionSpace for Production,” which integrates Landmark and third party data and products, following the Microsoft reference architecture.
Landmark has been putting together workflow “solutions” for a number of years, helping companies get the data they want from different systems and making that data available.
“By having a standard system it is ultimately much cheaper and quicker for the customer to install an asset management solution,” says Landmark’s Mr Szatny.
“This means that Landmark and its oil and gas customers can focus less on how information flows and more on how information is used and consumed within an oil and gas domain context.”
International technology and consulting company Accenture is going to use the Microsoft Upstream Reference Architecture as part of its service offering to oil and gas companies, when it offers to install what it calls an “integrated oilfield solution” for them.
Accenture focuses in particular on solutions to help companies optimise reservoir performance, well performance, facility performance, asset performance and also managing HSE (health, safety and environment).
About 18 months ago, Accenture rebuilt its solutions to work on the Microsoft stack of products, including databases, integration capabilities and visualisation technology.
Accenture tells companies, first of all, “we’d rather you had any architecture than no architecture,” and secondly, “we’d rather you had our architecture,” says Accenture’s Martin Leach.