Your ERP System is Live. Now What?

Congratulations! The ERP system you spent thousands of hours and millions of dollars on over the last twelve months is now live. If the proper implementation approach and project management methodology were applied, then this arduous task was completed on time and on budget.

So it’s smooth sailing from here, right? Unfortunately not.

There are a number of post-go-live challenges inherent in releasing a new ERP system that, if not addressed, will quickly derail the launch. The most common challenges are: project team burnout, limited end-user knowledge of the system, enforcing the data governance plan, and handling the myriad questions from end users.

Your organization can carry the successful implementation forward by following these steps:

1. Recognize and reward effort. The best employees from each department across the organization have spent nights and weekends building, testing and priming the system for go-live. The final weeks leading up to go-live can be the most challenging, and this same group will be heavily relied on to lead and support the organization during the first few months post-implementation.

Recognize the real, negative consequences of employee burnout and work to prevent its impact by:

  • Understanding the workload required to complete the tasks assigned to each individual.
  • Identifying supplemental resources early on.
  • Creating a clearly defined incentive plan before the project kicks off.
  • Rewarding each team member accordingly. Additional vacation days can be as valuable as cash or equity.

2. Keep training. User acceptance is a predictable challenge. Help users understand why and how the system operates to improve acceptance rates. Yes, there were a number of training sessions conducted before go-live, but those sessions alone will not suffice to prepare end users for day-to-day system tasks. More often than not, end users find training sessions held three to four weeks (a complete business process cycle) after go-live the most beneficial.

Post-go-live training sessions with participant-driven agendas are the most effective. This allows the trainer to focus on addressing the process steps and system functionality the user group is most concerned with. This approach will also increase attendance, as users will feel a sense of control over the utility of the training session.

3. Set up governance. Eight or nine months have likely passed since the process owners came together for design sessions and agreed to improve process efficiency and data quality through cross-functional collaboration. In reality, that cooperation will not be so easy to maintain once the system is live—a governance model for managing process changes must be in place.

However, the governance model should not be convoluted or overbearing. Create standard forms and workflows to approve proposed changes (e.g. new account, account structure change, new vendor, etc.) to help manage these requests. Ensure that proper security is applied and maintained to help eliminate rogue changes.
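To make this concrete, here is a minimal sketch, in Python, of how a standard change-request form and its approval routing might be represented; the change types, approver roles and names are hypothetical placeholders, not a prescribed design:

```python
from dataclasses import dataclass, field
from enum import Enum

class ChangeType(Enum):
    NEW_ACCOUNT = "New account"
    ACCOUNT_STRUCTURE_CHANGE = "Account structure change"
    NEW_VENDOR = "New vendor"

# Hypothetical routing rules: each change type maps to the roles that must approve it.
REQUIRED_APPROVERS = {
    ChangeType.NEW_ACCOUNT: ["Controller"],
    ChangeType.ACCOUNT_STRUCTURE_CHANGE: ["Controller", "FP&A Manager"],
    ChangeType.NEW_VENDOR: ["Procurement Lead", "AP Manager"],
}

@dataclass
class ChangeRequest:
    change_type: ChangeType
    description: str
    requested_by: str
    approvals: set = field(default_factory=set)  # roles that have signed off

    def approve(self, role: str) -> None:
        # Security check: only authorized roles can approve, which blocks rogue changes.
        if role not in REQUIRED_APPROVERS[self.change_type]:
            raise PermissionError(f"{role} cannot approve {self.change_type.value}")
        self.approvals.add(role)

    @property
    def is_approved(self) -> bool:
        return self.approvals >= set(REQUIRED_APPROVERS[self.change_type])

# A new vendor request must clear both approvers before master data is touched.
req = ChangeRequest(ChangeType.NEW_VENDOR, "Add Acme Drilling Services", "j.smith")
req.approve("Procurement Lead")
print(req.is_approved)  # False until the AP Manager also signs off
```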

4. Create an issue resolution process. Regardless of how thorough the test plans or how diligent the testers, issues will arise post-go-live. You must be prepared to manage and address them. The majority will be quick fixes, but a handful will impact closing the books or paying a vendor.

Establish a process to prioritize, resolve and communicate post-go-live issues. Provide users with an easy way to submit issues and keep them in the loop as technical changes are made. Managing this process well prevents users from working outside of the system or falling back into old routines.
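As a rough illustration, a triage routine like the one below keeps the backlog ordered so items that block closing the books or paying a vendor are worked first; the severity labels and sample issues are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical severity ranking: lower numbers are worked first.
SEVERITY_RANK = {"blocks-close-or-payment": 0, "workaround-exists": 1, "cosmetic": 2}

@dataclass
class Issue:
    id: int
    summary: str
    severity: str
    submitted_by: str
    submitted_on: date
    status: str = "open"  # open -> in progress -> resolved -> communicated

def triage(issues):
    """Order the open backlog so the highest-impact items come first."""
    return sorted(
        (i for i in issues if i.status == "open"),
        key=lambda i: (SEVERITY_RANK[i.severity], i.submitted_on),
    )

backlog = [
    Issue(1, "Report footer misaligned", "cosmetic", "a.jones", date(2024, 3, 4)),
    Issue(2, "Cannot post vendor payments", "blocks-close-or-payment", "b.lee", date(2024, 3, 5)),
]
print([i.summary for i in triage(backlog)])  # the payment blocker comes first
```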

Trenegy helps companies successfully manage ERP implementations all the way through go-live support. We help our clients get value out of their new system quickly and relatively painlessly. Read how to properly prepare an E&P company for implementation in our recent publication: E&P Company Systems: 4 ERP Implementation Land Mines to Avoid.

Improve Reporting Through the ERP: How to Make Better, Faster Decisions

An ERP system is designed to connect data from all major functional areas and improve an organization’s reporting capabilities. The goal is faster, better decision making by senior management, aided by a current and accurate picture of the organization’s performance.

To achieve this goal, an organization must first decide what information it needs out of the system. Because many configuration parameters cannot be changed after system integration has begun, it is important to identify critical reporting requirements at the outset of an ERP implementation.

While there is no one-size-fits-all reporting model, there are a few considerations that will make or break the usefulness of your final reports:

1. Use the increased level of detail available with a new system. Understand the new capabilities of your ERP system and develop a reporting hierarchy that takes advantage of more precise revenue and expense classifications.

With a new ERP, many companies are able to increase expense categories from three to 15, allowing for a much more granular view of profitability. This allows an E&P company to parse out smaller expense classifications, like how much money is spent on vehicles, at each well site.

Similarly, legacy systems often limit the definitions of cost centers to units, wells and leases. A new system can expand these categorizations, giving management a comprehensive view of balance sheet activity. A completion can be recorded as such, rather than as a well. A unit can be recorded as a legal land unit instead of a grouping of wells used for accruals.

2. Set up the reporting hierarchy to support budgeting. The hierarchy in which you book and report your revenue, production and expenses should be consistent with the level at which you want to budget. Even if budgets are managed outside of the primary ERP system, the actuals that serve as the basis for comparison will always be housed in the ERP.

Operations and accounting constantly struggle over reporting needs. Operations may want to view billable versus unbillable LOE, or operated versus non-operated status, at a field or well level, but accounting wants to see information at a higher, aggregated level in the hierarchy. With careful planning, the hierarchy can be set up to accommodate operations’ reporting needs as well as internal and external financial reporting within the same structure.
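One way to picture this is a single set of ledger lines booked at the well level, carrying the hierarchy on each record, so the operations and accounting views are simply different rollups of the same data. The sketch below uses hypothetical fields, wells and expense categories:

```python
from collections import defaultdict

# Hypothetical ledger lines booked at the most granular level (the well),
# with the hierarchy carried alongside each record.
ledger = [
    {"field": "North Field", "well": "NF-01", "category": "Vehicles",    "billable": True,  "amount": 1200.0},
    {"field": "North Field", "well": "NF-01", "category": "Chemicals",   "billable": False, "amount": 800.0},
    {"field": "North Field", "well": "NF-02", "category": "Vehicles",    "billable": True,  "amount": 950.0},
    {"field": "South Field", "well": "SF-07", "category": "Compression", "billable": True,  "amount": 2100.0},
]

def rollup(lines, *keys):
    """Aggregate amounts by any combination of hierarchy levels."""
    totals = defaultdict(float)
    for line in lines:
        totals[tuple(line[k] for k in keys)] += line["amount"]
    return dict(totals)

# Operations view: billable LOE by well.
print(rollup([l for l in ledger if l["billable"]], "well"))
# Accounting view: total expense by field, ready for financial reporting.
print(rollup(ledger, "field"))
```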

3. Consider the company’s long-term goals and growth trajectory. Ensure the ERP is set up to support growth by cleansing data before go-live. Clean master data sets a solid foundation that can sustain the burden of additional data in the event of an acquisition.

Consider the amount of history needed for reporting. Unused or excess accounts in the Chart of Accounts (COA), properties that have been sold, or wells that have been plugged and abandoned for more than five years should not be set up in the new system.
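A migration filter can encode these cleansing rules directly so stale records never reach the new system. The sketch below assumes a hypothetical well master extract and applies the five-year plugged-and-abandoned cutoff described above:

```python
from datetime import date, timedelta

# Five-year cutoff for plugged and abandoned wells, per the rule above.
CUTOFF = date.today() - timedelta(days=5 * 365)

# Hypothetical well master extract from the legacy system.
wells = [
    {"well_id": "NF-01", "status": "producing", "pa_date": None},
    {"well_id": "SF-03", "status": "plugged", "pa_date": date(2012, 6, 30)},
    {"well_id": "SF-07", "status": "plugged", "pa_date": date(2023, 1, 15)},
]

def migrate(record):
    """Keep active wells and anything plugged within the last five years."""
    if record["status"] != "plugged":
        return True
    return record["pa_date"] is not None and record["pa_date"] >= CUTOFF

# Wells plugged and abandoned more than five years ago are left behind.
print([w["well_id"] for w in wells if migrate(w)])
```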

While thinking about the future may seem like a no-brainer, companies often become so consumed with supporting current requirements that future considerations and long-term growth plans are not taken into consideration. A certain level of reporting may not be needed today, but will it be needed in the future?

Companies invest in ERP systems to improve efficiency and profitability. Developing a reporting strategy prior to implementation will ensure maximum benefit and desired outputs are achieved. Trenegy helps companies implement a variety of ERP systems and develop a reporting strategy that fits business requirements and supports long-term strategic goals.

Building a Resilient IT Strategy, or Just Fancy Binders?

IT executives are constantly faced with the challenge of delivering a high level of service and value to their customers while managing a tight IT budget. In most organizations, the cost of IT has increased more (as a percent of sales) over the past ten years than any other administrative cost. Our research has shown that, in many organizations, it takes five revenue dollars to cover every dollar spent on IT.

Among business leaders, the IT function remains the most misunderstood component of a corporation's cost structure. Most executives struggle with the CIO's suggestions to spend millions on enigmatic items. Providing a framework for the CIO to manage and communicate technology directions, costs, benefits and standards to the business is a necessary step for improving the organization's ability to execute business strategies using IT.

To address the IT delivery model for a business, virtually all major corporations and institutions have developed some level of an IT strategic plan. These strategic plans are intended to guide the IT organization's allocation of resources and align IT with the strategies of the business. But if most major enterprises have developed IT strategies, then why are these companies continuing to struggle with managing and understanding IT costs? Why are the IT strategic plans sitting on a shelf collecting dust in the CIO's office next to a few other consulting studies on the benefits of SOA or upgrading to Windows Vista?

Strategies that Fail

The following strategies fail to meet expectations because ERP systems are not all things to all people. The resulting environments do not deliver the promised reporting, analytics, or business process improvement capabilities. As a result, the legacy IT strategy is thrown out the window along with millions of wasted dollars.

ERP Centric. Our experience has shown that most legacy IT strategies are not sustainable, nor do they address the real business issues. Instead they focus on addressing point-in-time business needs. It is common for a company to suppose a new, multimillion-dollar ERP system will solve its technology issues. These ERP-centric strategies are developed because it is easy to see how an ERP platform could become the rallying cry for IT to improve business results. The problem lies in the assumption that the current application environment cannot support the business strategies.

Implement ERP. ERP rarely touches specialized applications that are critical and unique to business operations. A focus on implementing ERP often immediately eliminates alternative ways to meet strategic business needs from a technical perspective.

Jumping to ERP. Other options such as ease of use, business intelligence, upgrading current applications or improving application integration capabilities get lost in the shuffle and are not always addressed in a legacy IT strategy. Jumping to ERP forces an answer and sometimes allows other viable options to fall through the cracks. In addition, large projects usually move forward without any particular agreement on implementation principles, change management, quantifiable success measurement or joint buy-in from operations, sales, human resources and finance.

Strategies that Work

How can today's corporations and institutions develop an IT strategy that is sustainable, comprehensive, realistic, and part of the everyday job of the IT organization? Virtually every IT strategy begins with understanding the company's overall business strategies, processes and priorities. Determining the IT implications of these business strategies then becomes the foundation. Whether corporate strategies lead to administrative cost reductions, the need for scalability, or the need for faster IT response, the strategies need to be linked to actionable technology principles and standards.

An IT strategy focused on technology principles and standards is the key. IT principles and standards can become longer-lasting strategies for an organization. Technology initiatives alone (whether strategic or not) are not lasting and can be quickly rejected once the business case is exposed or the business changes direction.

Avoid wasted effort by establishing guiding principles that define how the IT organization should execute key processes like planning, standards management, technology deployment, support, maintenance and operations.

Once guiding principles are defined, the IT strategy becomes clear. The IT strategy can be reviewed regularly as a critical part of the planning process to provide unified direction for IT and enable a realistic budget. Each year prior to budgeting, the IT leadership team should spend time with the business, formally reviewing IT strategies, making recommendations for improvement and updating action plans and principles as required. If certain initiatives are not approved in the budget, then IT strategy adjustments may be necessary. This process should drive the IT budget for the coming year and provide a balance of costs and service levels expected by the business.

Remember: Whichever IT principles, strategies and initiatives an organization decides to accept, an optimal balance between managing costs and improving value will rarely be achieved without a resilient IT strategic planning process in place.

The 6 Most Important Provisions in a Statement of Work

All parties involved in a system implementation must agree on a statement of work (SOW) before the project can begin. However, as with any lengthy contract full of complicated clauses and legal jargon, it is easy to lose sight of key terms. Omission of these important provisions can cause budget problems later in the project.

SOWs from Systems Integrators (SIs) are especially complex given the technical nature of the work. Keep an eye out for these six key points in a statement of work to avoid eventual dispute or delay:

Service Level Agreement. A detailed list of expectations for the new system should be plainly stated within the agreement. Projected report run times, data storage capacity, and system outputs (reports, metadata and spending metrics) depict a clear vision of the system’s ultimate functionality. This level of description gives all parties a tangible idea of what “finished” means.

Team Member Performance. An agreement needs to be established around the project management model. Predetermined governance ensures that service issues have an established mode of resolution. For instance, in the case that an SI’s team is not performing as expected, can either party request team member changes? There should be a structured method of replacing a team member who is not performing. Document these details within the agreement, and the project will run smoothly with the best resources available.

Hours Billed. An important clarification, and one that is easily overlooked, is the criteria for time that can be billed back to the company. It is far easier to address this issue at the outset of a project. Travel time and expenses are typically included, but it’s smart to get specific parameters for the definition of travel time. For example, are hours spent in a car or on a plane billable? Ask for an estimate of expected working hours per week for each phase of the project. Each of these items, no matter how minor, will affect the project budget.

License Details. The systems integrator should include a section detailing licensing agreements. Terms and cost of licensing should be outlined and agreed upon up front. Additional costs, like yearly maintenance and fees for future upgrades, should also be listed. Without these items, ambiguity of ownership can cause problems when an organization needs to make further system changes or updates.

Variance Agreement. Over time, changes in the business environment might necessitate alterations to the project plan. Be sure that all parties are updated on project revisions by explicitly requiring within the SOW that all changes be documented. If additional work is requested, a new work order needs to be created, signed and added to the original agreement.

Project Scope. Another critical stipulation to be included in the SOW is the scope of the project. This section should list all of the vendor's responsibilities and tasks, such as implementation/migration, testing, training and support. This list will help determine when new work orders are needed. The project scope also sets expectations for the level of support the company will provide the vendor. For example, the company needs to be clear about how many employees will work on the project and which subject matter experts can be consulted for major decisions.

An SOW without these points can leave room for disputes on payment amounts, expectations and other project details. By developing a detailed SOW, everyone involved in the project can focus on the critical path to completion. Trenegy helps companies successfully prepare for system implementation by ensuring vendor agreements are clear and comprehensive.

How to use Policy and Procedure Development to Improve Processes

Sarbanes-Oxley, ISO certification, audit control deficiencies, and IPOs are all driving companies through the pain of developing extensive policies and procedures (P&Ps). Most efforts to develop P&Ps are rushed and insufficiently budgeted.

Companies overlook critical processes that require P&P documentation, or worse, create unnecessary P&Ps. Steps are often added to fix controls without considering the negative impact on process efficiency. Over time these Band-Aids create an inefficient, spaghetti-like mess that makes day-to-day activities laborious.

Organizations can take the opportunity to improve processes and controls while creating P&Ps using the following steps.

1. Break Down the Process. Process decompositions can be used to identify and map out individual steps. The steps should highlight important day-to-day activities so companies can correctly align P&Ps with processes, not vice versa.

An inventory of processes should be maintained to prioritize how future-state processes and P&Ps are addressed. Those with the highest priority typically fall within the Order-to-Cash, Procure-to-Pay and Record-to-Report mega processes. Lower priority processes may not need to be addressed before the next audit cycle if controls are monitored via reporting.

2. Establish a Vision. Create a process improvement vision to help determine the desired future state process environment. Create a team of cross-functional process owners and subject matter experts to develop the vision while considering the impact on organizational structures and systems configurations. Get signoff on the improvement vision by senior management.

Finally, facilitate future-state process development sessions and map out projected flows. An initial comparison to the COSO 2013 principles and the weaknesses report can help the team ensure all critical control needs will be addressed. Confirm with auditors and senior management. Refine as necessary.

3. Develop Tools to Define Ownership, Roles and Responsibilities. It is important to assign ownership to processes to avoid duplicate efforts, unnecessary steps and control issues. Recommending changes that reduce an employee’s level of responsibility often elicits strong emotional resistance. Reassigning a function, task or employee can be equally challenging. However, a RACI (Responsible, Accountable, Consulted, and Informed) matrix is a helpful tool to define roles and responsibilities for each process.

Creating a RACI diagram is simple. List all the process steps and map the RACI to each process owner or functional area. Only one person can be responsible (one “R” on a line) for a process to comply with Segregation of Duties. If there is more than one “R” across functional areas, either eliminate one or break the process down further.

There can be more than one “A” (person accountable) for a process, but assignments should reflect sound delegation of authority. Any duplicates should be discussed and confirmed using the COSO 2013 principles. Picking roles or individuals to be “C” consulted or “I” informed helps the team finalize realistic delegation of authority.
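The one-"R" rule is also easy to check mechanically. The sketch below, with hypothetical process steps and functional areas, flags any step that carries more than one "R" so the team knows where to eliminate a duplicate or break the process down further:

```python
# Hypothetical RACI matrix: rows are process steps, columns are functional areas.
raci = {
    "Create vendor record":   {"Procurement": "R", "AP": "C", "Controller": "A"},
    "Approve vendor invoice": {"Procurement": "C", "AP": "R", "Controller": "A"},
    "Post payment":           {"AP": "R", "Treasury": "R", "Controller": "A"},  # two R's: needs rework
}

def duplicate_responsibles(matrix):
    """Flag process steps with more than one 'R', per the one-R rule above."""
    return {
        step: [area for area, role in areas.items() if role == "R"]
        for step, areas in matrix.items()
        if sum(role == "R" for role in areas.values()) > 1
    }

print(duplicate_responsibles(raci))
# {'Post payment': ['AP', 'Treasury']} -> eliminate one 'R' or break the step down
```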

Once completed, the RACI will often drive organizational changes ranging from reassignment of tasks and duties, to movement of personnel and functions. A clearly defined organization chart with the new roles and accountabilities may need to be developed.

4. Confirm Impact on Systems. A combination of process flows and RACI diagrams can be used to determine the best way to configure systems to support the new day-to-day activities while providing the right level of controls. The configuration changes to support the new processes and organizational structure should be prioritized based on implementation complexity and P&P rollout schedule.

Many systems provide powerful workflow functionality that can be configured to support the new processes and controls without creating unnecessary burden. However, each workflow should be evaluated to determine the impact on day-to-day activities. Consider using reports to monitor transactions instead of implementing workflows that create unnecessary steps or slow down critical processes.

5. Bring it All Together. P&Ps are the glue connecting controls, processes, and roles and responsibilities. This step is easy when done correctly and all previous steps are followed. Procedures should reflect policies and policies should be tied to controls. Old policies should be modified to reflect the new processes. If gaps are identified, new policies should be developed.

A cross-functional team should test and refine the policies and procedures before final review with the auditors. The testing team should work hard to determine how to break, or violate, the P&Ps without getting caught. Consider using more timely monitoring tools before making significant changes to the process unless the breach would result in material weaknesses.

Once completed, the new fit-for-purpose processes, policies, and procedures should be shared across the new organization.

Trenegy recognizes the importance of regulatory compliance. Companies often struggle to address basic controls issues without incurring significant cost or process inefficiency. We help our clients maintain strong controls without sacrificing efficiency. Read how to properly roll out new Policies and Procedures to ensure they stick in Seven Tips for Effective Training.

I Don’t Speak IT: How to Get What You Want From Developers

When businesses turn to software developers to modify reports, workflows and general system functionality, they too often find themselves thinking, "It still isn't right!" The truth is, developers often think in ways that are unfamiliar to those who don't have a technical background. If a development request arrives with gaps, the technical team is left to fill them in by making assumptions. These assumptions are rarely correct and often result in frustration on both sides.

The same principle applies to visiting a foreign country without speaking the native tongue—Google Translate is quick and easy to use, but not 100% effective in conveying the message. Translation tools give literal interpretations, but sentence structures vary across languages and idiomatic sayings are rendered meaningless in word-for-word translation.

Businesses can avoid miscommunication and thus the burden of costly, wasted development time by following the steps below:

  1. Conduct discovery sessions. Business areas often communicate development requests in passing or between meetings, and a true understanding of the request is lost. Schedule a formal meeting to discuss new development efforts so the development team can fully understand the issue. During discovery sessions, the business should provide a visual of how data is currently displayed and how it should look in the future. Pull in a projector and walk through software and reporting portals to ensure developers understand how data is presented in the user interface.
  2. Document business requirements and establish a timeline. Following discovery sessions, the development team should create requirements documentation. Present documentation in a consistent format that outlines the purpose of development requested, business use, general requirements, business rules and an expected completion date. The business must sign off before development starts. This will ensure the technical team is not left to make assumptions, which generally results in the business paying for wasted development. Signatures show both sides understand expectations for delivery.
  3. Keep a development log. A log of development requests allows the business to track tasks currently in progress and mark backlogged items. Finalized modifications should be marked complete in the log and communicated to business users impacted by the change. Development logs are also a useful reference tool: New team members can refer to the logs to understand what standard software functionality has been enhanced along with business justification for the updates. A minimal sketch of such a log appears after this list.
  4. Perform preliminary testing before releasing to UAT. It is common for development teams to only verify coding changes in the backend databases where they are most comfortable working. To be thorough, development teams should also verify that changes are displayed on the front end, where information is available to the business. Check for updates in both places to mitigate the risk of having users test updates before they are ready.
  5. Require business signoff to promote updates to live environment. After coding updates have been verified, they should be released to a clean testing area where the requester(s) perform User Acceptance Testing. During this time, the business will run through test cases and scripts to ensure the updates do not impact the full end-to-end process. The assigned tester should provide proof of the successful test run, which will indicate that the fix is ready to be promoted to the live environment.
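Referring back to step 3, a development log can be as simple as a structured list of requests with status, business justification and a record of who was notified. A minimal sketch, with hypothetical entries:

```python
# Hypothetical development log entries; statuses move from "backlog" to
# "in progress" to "complete" as modifications are finalized.
dev_log = [
    {"id": 101, "request": "Add AFE number to invoice approval screen",
     "justification": "Approvers need AFE context to code invoices correctly",
     "status": "complete", "notified": ["AP team"]},
    {"id": 102, "request": "Nightly export of LOE actuals to budgeting tool",
     "justification": "Budget vs. actual review is currently manual",
     "status": "in progress", "notified": []},
]

def open_items(log):
    """Items still being worked, for the weekly status review."""
    return [entry for entry in log if entry["status"] != "complete"]

print([entry["id"] for entry in open_items(dev_log)])  # [102]
```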

Working with a development team can be difficult for non-technical people. IT vernacular can be as intimidating as a foreign language, but running through these simple exercises will eliminate wasted development efforts. Trenegy works with businesses and development teams to smoothly manage large-scale system implementations.

The Importance of Clean Master Data Before Go-Live

Companies spend millions of dollars to purchase and implement new ERP systems in the name of process improvement and efficiency. Yet many companies do not put the necessary time, effort and money into cleaning up master data before going live with a new system. Master data is a term used for data objects that are agreed upon and shared across the company, e.g. customers, suppliers, products and services. Clean master data is a term used to describe data that is accurate and properly structured within a system.

Going live with unclean master data undermines the ERP implementation in the following three ways:

  1. Data input standards only get harder to enforce after go-live. Implementing a new system is an opportunity to start with a clean slate from a data standpoint. Once a new system is live, the difficulty of going into the system and cleaning or fixing master data increases significantly, while the probability of going through this exercise decreases significantly. When purchasing a new car, most people would not take all of the trash out of their old console, backseat or trunk and throw it into the new car. The same logic applies to a new system – it does not make sense to bring in duplicate, inaccurate or unnecessary data. Take the time to go through existing data and make sure it is accurate, mutually exclusive and collectively exhaustive.
  2. Clean master data allows users to navigate the system as it was intended. A huge benefit of ERP systems is the way data and transactions are linked. These relationships make navigating the system and finding documents easier. The links also reduce the time and uncertainty associated with searching for documents and analyzing transactions. When master data is not controlled and accurate, the links break. For example, a client's system contained duplicate vendor names — some written in all caps, some with spaces, some with no spaces — because processes and standards around master data maintenance were lacking. On more than one occasion, this client paid a vendor twice for one AP invoice, once to one vendor and once to a duplicate of that vendor. Imagine the cash flow nightmares companies have to deal with for something that can be fixed so easily. (A simple normalization check, sketched after this list, catches duplicates like these before they reach the new system.)
  3. Accurate and timely reporting will be readily available to management. The most frequent complaint about legacy systems is that management cannot trust the output. Most reports from legacy systems are Excel-based and undergo a lot of manual manipulation, leaving room for keystroke errors. The purpose of implementing an ERP system is to get operations and accounting data in one integrated system so information can be pulled in real time for reporting. The reporting tools in today’s ERP systems are extremely powerful and eliminate the need for manual manipulation. However, the quality of reports is only as good as the quality of data.
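Catching the duplicate-vendor problem described in point 2 is straightforward once names are normalized. The sketch below uses hypothetical vendor records and collapses case, punctuation and spacing before comparing:

```python
import re
from collections import defaultdict

# Hypothetical vendor master extract with the near-duplicates described above:
# the same vendor entered in different cases and with different spacing.
vendors = [
    (1001, "ACME DRILLING SERVICES"),
    (1002, "Acme Drilling Services"),
    (1003, "AcmeDrillingServices"),
    (1004, "Lone Star Compression"),
]

def normalize(name: str) -> str:
    """Collapse case, punctuation and spacing so near-duplicates match."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

groups = defaultdict(list)
for vendor_id, name in vendors:
    groups[normalize(name)].append(vendor_id)

duplicates = {key: ids for key, ids in groups.items() if len(ids) > 1}
print(duplicates)  # {'acmedrillingservices': [1001, 1002, 1003]}
```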

The more work done on the front end to organize and cleanse master data, the more functional and accurate the reporting is. Trenegy starts every implementation with a data model and reporting strategy. By creating a blueprint of the reports a company expects from the ERP, software developers can build fields that will capture the right data from the start.

Don’t Overlook IT Infrastructure During Acquisition Integration

When planning mergers and acquisitions, it's easy to forget about IT because most executives are focused on financial reporting and operations. Email access, working phone lines and software applications are taken for granted. However, without diligent planning and project management, merging IT infrastructure can cause huge disruptions in daily business. An organization undertaking an acquisition can ensure that critical business processes continue uninterrupted by adhering to these four principles:

1. Network cutovers must adhere to a timeline. The network is the core of office communication. It is the way field employees and data communicate with the corporate office, whether it’s sending an email or transmitting production data. For example, if a foreman is trying to upload well data into ProCount, but has no Internet connection, production data cannot be reported. Without up-to-date production data, the corporate office can’t report well revenue and costs in an accurate or timely manner.

Network equipment, such as routers, switches and circuits, must be available and installed before the cutover can take place. The network architecture must be finalized before critical business processes, such as turning up SCADA, can happen. At this stage, it is crucial to review resource availability, internally and with vendors, to adhere to a firm deadline.

2. SCADA transfers happen in parallel with the network cutover. SCADA data, or automated production data, is some of the most important company data. Replication servers, usually found in data centers, function as a backup and must be reconfigured and tested to ensure they’re communicating with onsite servers. SCADA must be communicating with the new network before the old network is cut off. If this order gets reversed, there’s a risk of losing important data.

In an ideal SCADA world, the whole company would be on a standard SCADA system with identical system architecture and equipment between sites. It’s important that an internal resource has functional experience with the SCADA system and has the working knowledge to support and troubleshoot it.

3. Hardware updates affect the physical equipment employees will be using. Merging offices will require upsetting people’s daily routines to establish new ones. Computers need to be reimaged with new company standards, covering everything from desktop images and printer drivers to software applications like WellView and ProCount. When possible, use remote login to take inventory of applications in use at field offices. This will help determine what applications will be used going forward and if there is additional software that must be added to the company portfolio.

4. Testing is the final step in cutting over an office. Bring internal resources to branch offices to check each user’s ability to connect to the Internet, place a call and connect to printers. Face-to-face service builds relationships between IT staff and remote office employees. Onsite internal resources give employees access to immediate help should issues arise with opening or submitting data through new applications. It’s also an opportunity to provide one-on-one end user training and reference materials to employees.

Once it has been confirmed that all new systems are functioning and everyone can complete their daily activities, the office has successfully been cut over to the new network. While it may seem like small potatoes in relation to the operational and financial integration that takes place during a merger, IT integration is the foundation for bringing new employees and data into an organization. There are many moving parts in an office cutover that need to be addressed; Trenegy helps organizations navigate all aspects of acquisition integration.

A Successful Cybersecurity Strategy

By: Matias Fefer and Peter Purcell

Boards are being pressured to ensure companies have developed and deployed robust cybersecurity strategies. Government oversight is increasing with the recent passage of HR 1770 by the Energy and Commerce Committee, which encourages companies to share details of all computer breaches with the U.S. government and affected parties. Many feel that HR 1770 is a precursor to supplementing Sarbanes-Oxley Sections 404 and 409 to hold board members and senior executives accountable for cybersecurity lapses.

Companies addressing cybersecurity threats face two immutable facts:

Fact #1: The IT environment will be hacked no matter how much money or effort is put into preventing cyberattacks.

Fact #2: The only way to prevent hacking is to disconnect computers from the network, disable all external ports, and prevent access by end users.

A company cannot perform business effectively on a day-to-day basis if computers are not networked. Emails need to be sent to clients, electronic orders need to be processed, and critical operational and financial information needs to be shared in a timely and accurate manner.

Companies that have successfully addressed cybersecurity concerns leverage a balanced, four-pronged strategy.

Prevention: Companies need to proactively identify and prioritize critical data or real-time control systems that need protection against unauthorized access. Penetration testing of priority areas will help determine gaps to be addressed with a combination of training, software and hardware upgrades, and security solutions.

Realistic prioritization of vulnerabilities is critical to ensure cost effective solutions are implemented. A recent study by an Ivy League college shows the most successful way to address breaches revolves around training and awareness. Cybersecurity experts say that more than 90% of breaches are a result of employees clicking links in phishing emails, infected emails from friends or cloned web sites. The remainder of breaches come through vulnerabilities in computing environments that have not been updated.

Detection: No amount of prevention will change the fact that networked computer systems will be hacked. Employees make mistakes. Companies need the right tools and resources to identify when a system has been hacked. Software tools and internal resources can be used to monitor networks and user accounts on a day-to-day basis.

Developing a relationship with a third-party cybersecurity firm is critical. Leverage the firm on a regular basis to test the computing environment and address hidden infections. Update training and end user communication based on results.

Mitigation: A realistic cybersecurity strategy includes contingency plans to quickly address inevitable breaches. It is not cost effective for IT departments to acquire all the tools and resources needed to recover from specific cyberattacks. Hackers continually change their methods, making it nearly impossible for an internal IT department to keep up.

Work with a third-party cybersecurity firm to develop and deploy clear protocols and service level agreements for addressing cybersecurity threats. Develop and deploy a clear communication strategy with end users. End users need to know what to do and expect in the event of a cyberattack.

Transfer: A cyberattack creates significant risk given the sophistication and aggressiveness of hackers combined with the increasing reliance on computer systems. Procure insurance or migrate to a hosted environment to transfer risk. Cybersecurity insurance helps pay for the cost of mitigation and impact on all affected parties.

Using external hosting for key systems places the responsibility of mitigation on the service provider, as long as a company employee did not cause the cyberattack. The hosting provider is responsible for ensuring the core computing environment is properly updated and protected.

There are a significant number of technological solutions to address cybersecurity concerns. Many are very expensive and can limit employees’ ability to perform their day-to-day activities efficiently and effectively. However, the most effective software solution lies between employees’ ears. Strong leadership, change management, and training are critical to helping companies teach employees how to prevent cybersecurity lapses.

Matias Fefer is the Director of Information Technology for Atwood Oceanics and a leading cybersecurity strategist in oilfield services. Matias heads an offshore services consortium tackling cybersecurity issues with critical computing and equipment suppliers. He can be reached at mfefer@atwd.com.