Technology Strategy
Building a Resilient IT Strategy, or Just Fancy Binders?
by Peter Purcell
IT executives face the constant challenge of delivering a high level of service and value to customers while managing a tight IT budget. In most organizations, the cost of IT has increased more (as a percent of sales) over the past 15 years than any other administrative cost. Our research has shown that in many organizations, it takes five dollars of revenue to cover every dollar spent on IT.
Among business leaders, the IT function remains the most misunderstood part of a corporation’s cost structure. Most executives struggle with the CIO’s suggestions to spend millions on enigmatic items. Providing a framework for the CIO to manage and communicate technology directions, costs, benefits, and standards to the business is necessary to improve the organization’s ability to execute business strategies using IT.
To address the IT delivery model for a business, nearly all major corporations and institutions have developed some level of an IT strategic plan. These strategic plans are intended to guide the IT organization’s allocation of resources and align IT with the strategies of the business.
But if most major enterprises have developed IT strategies, why do these companies continue to struggle with managing and understanding IT costs? Why are the IT strategic plans sitting on a shelf collecting dust in the CIO’s office next to a few other consulting studies on the benefits of SOA or upgrading to Windows Vista?
Strategies That Fail
The following strategies fail to meet expectations because ERP systems are not all things to all people. The resulting environments do not deliver the promised reporting, analytics, or business process improvement capabilities. As a result, the legacy IT strategy is thrown out the window along with millions of wasted dollars.
ERP-Centric – Our experience has shown that most legacy IT strategies are not sustainable and don’t address real business issues. Instead, they focus on addressing point-in-time business needs. It is common for a company to suppose a new, multi-million dollar ERP system will solve technology issues. These ERP-centric strategies are developed because it’s easy to see how an ERP platform could become the rallying cry for IT to improve business results. The problem lies in the assumption that the current application environment cannot support the business strategies.
Implementing ERP – ERP rarely touches the specialized applications that are critical and unique to business operations. A narrow focus on implementing ERP often eliminates, from the outset, alternate ways to meet strategic business needs from a technical perspective.
Jumping to ERP – Other options such as ease of use, business intelligence, upgrading current applications, or improving application integration capabilities get lost in the shuffle and are not always addressed in legacy IT strategy. Jumping to ERP forces an answer and sometimes allows other viable options to fall through the cracks. In addition, large projects usually move forward without any particular agreement on implementation principles, change management, quantifiable success measurement, or joint buy-in from operations, sales, human resources, and finance.
Strategies That Work
How can today’s corporations and institutions develop an IT strategy that is sustainable, comprehensive, realistic, and part of the everyday job of the IT organization? Virtually every IT strategy begins with understanding the company’s overall business strategies, processes, and priorities. Determining the IT implications of these business strategies then becomes the foundation. Whether corporate strategies lead to administrative cost reductions, the need for scalability, or the need for faster IT response, the strategies need to be linked to actionable technology principles and standards.
An IT strategy focused on technology principles and standards is the key. IT principles and standards can become longer-lasting strategies for an organization. Technology initiatives alone (whether strategic or not) are not lasting and can be quickly rejected once the business case weakens or the business changes direction.
Avoid wasted effort by establishing guiding principles that define how the IT organization should execute key processes like planning, standards management, technology deployment, support, maintenance, and operations.
Once guiding principles are defined, the IT strategy becomes clear. The IT strategy can be reviewed regularly as a critical part of the planning process to provide unified direction for IT and enable a realistic budget. Each year prior to budgeting, the IT leadership team should spend time with the business, formally reviewing IT strategies, making recommendations for improvement, and updating action plans and principles as required. If certain initiatives are not approved in the budget, then IT strategy adjustments may be necessary. This process should drive the IT budget for the coming year and provide a balance of costs and service levels expected by the business.
Remember: Whichever IT principles, strategies, and initiatives an organization decides to accept, an optimal balance between managing costs and improving value will rarely be achieved without a resilient IT strategic planning process in place.
3 Ways to Give IT a Seat at the Table
by Peter Purcell
IT executives are rarely given a seat at the corporate table when strategic business decisions are considered. In most cases, IT is not viewed as a strategic business partner by internal leadership. We recently researched the IT environments at various companies. After talking with more than 200 IT executives, we have determined three ways to engage IT as a strategic partner.
1. Establish a Governance Model
Companies have a history of overloading their IT departments with new technologies to implement and support as the business grows. How your IT department handles these new projects depends on its inclusion or exclusion from the corporate table.
Non-strategic IT executives react by acting as a project traffic cop when presented with project requests. They often say projects will have to wait until IT capacity becomes available. Leadership then considers IT the “no” police and executes projects around IT.
Strategic IT executives never say no to projects. The strategic IT organizations establish a clearly defined process for working with leadership to plan, cost, evaluate, approve, prioritize, execute, and complete projects. This process allows leadership to prioritize projects based on realistic costs and benefits. The governance model includes a virtual IT Steering Committee—leaders across the business who work together to manage this process on a regular basis.
Underneath the governance model are strong IT processes that define decision making and prioritization. For example, strategic IT creates a clearly defined intake process for demand. The intake process quickly determines whether a support request is an enhancement that should be treated as a new project or a quick fix. New projects are driven through the governance model while true support requests are addressed based on priority, but never ignored.
An effective governance model allows IT to work with leadership to ensure critical projects are executed on-time and within budget. The IT governance model should allow the business to better understand IT and more importantly, trust IT’s advice.
2. Create the Right Support Structure
IT organizations should be structured to balance support (keeping the lights on) and development (implementing new technologies) activities. Non-strategic IT tries to do it all in house and creates a large structure where neither is done well. Development projects become delayed and over budget, and the IT executives are dragged into day-to-day support issues.
Strategic IT starts with perfecting the support model where IT almost becomes invisible. This might start with moving certain applications to the cloud where system downtime is nearly eliminated, or considering a BYOD (bring your own device) model for phones. On-demand IT support technologies can also be implemented to make IT more responsive to support needs. Once the support model is perfected, IT executives can focus on the “net new” and grow into a strategic business partnership role.
3. Collaborate to Increase Efficiency
Collaboration is an overused term and is the gray area between “control” and “concede.” Non-strategic IT either concedes to all of leadership’s demands or takes control of systems away from the business. Leadership either receives everything asked for, or IT seems to be protecting the business from itself. Leadership then considers IT irrelevant and operates around IT.
Strategic IT leverages the governance model and support structure to clearly understand what drives revenue growth. For example, IT staff visit with operations personnel, shadow sales people, and talk to suppliers and customers to determine how systems are supporting revenue generating activities. IT and the business work together to better leverage existing systems to support day-to-day operations.
Tactical business issues are proactively resolved, and new solutions are identified to plug gaps and increase efficiency. Collaboration between IT and leadership ensures the portfolio of IT systems provide maximum value to the business.
What Is Bimodal IT? Do You Need It?
by Michael Critelli
A well-known IT research company publishes yearly rankings of the top business technologies on the market. In a way, they are the Yelp of the IT systems world. They do an excellent job of rating technologies, but unlike Yelp, every five years the IT research company tries to step out of its comfort zone and advise companies on different IT management strategies and processes.
That would be like Yelp creating a new business strategy for your restaurant. Yelp is great for helping restaurant owners understand customer feedback on service and food. However, Yelp does not help a restaurant owner manage, hire, or increase profits. Over the years, this IT research company has introduced various terms and concepts to manage an IT department, but most, if not all, have fizzled out after two or three years.
“Bimodal IT” is one of these strategies. It’s defined as “the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility…”
Huh?
Does this mean two different IT departments? Sort of. There are pros and cons to each approach.
Mode 1
This can be called the slow, safe mode. This is the traditional way of managing IT. Emphasizing safety and accuracy when handling problems, projects, fixes, or patches, this mode focuses on managing “systems of record,” such as accounting, banking, human resources, and payroll. The focus here is on minimizing risks and ensuring that these systems are always available and performing well.
Mode 2
This is more of a fast, risky, non-traditional mode. IT is supposed to employ agility and speed when approaching problems, projects, fixes, or patches. The focus is on managing “systems of engagement,” such as websites, platforms, vendor portals, and CRM. This approach encourages experimentation, taking calculated risks, and being ready to change direction if an initial idea doesn’t work out.
Further Explanation
Imagine if Yelp advised restaurants to split their menu and kitchen into two divisions:
1) Good Food that Takes a Long Time
2) Weird New Food That Comes Out Quickly
This would allow the customer to decide which irritant they would rather experience. At first, this doesn’t sound so bad, because with a split kitchen, your new staff can focus on getting their food out quickly while the experienced staff can focus on food quality. Unfortunately, customers will suffer, as no customer should have to sacrifice one good thing for another.
For older and larger companies, IT is becoming naturally divided between traditional systems of record and new quick-to-implement systems of engagement. These new systems are implemented in rapid spurts and emphasize low maintenance, while the older, larger systems tend to take years to implement and require specialists to manage them.
Is Bimodal IT Necessary?
The bimodal IT model recommends that instead of wasting time trying to combine the two ideas or throwing the old way of IT out, the two modes should be run as separate groups with completely opposite objectives. Implementing a bimodal IT model could, in the worst case, double IT staff, with one half of personnel maintaining the current systems and the other half implementing systems of engagement. It’s costly. Is it necessary?
The IT research firm argues that leading IT organizations follow this model. But while it may be true that some very large organizations have evolved into a model that is close to bimodal IT, it’s not the best practice. In reality, it’s more common for large organizations to keep their older systems, or legacy systems, and simply bolt on additional software as needed.
True market leaders do not compromise agility for stability but rather are constantly improving and setting the standard for speed, safety, and quality. These leaders take the agility part of bimodal IT and implement it companywide by using collaboration and prototyping. With technology having a bigger and bigger impact on all functions of the business world, it’s important for executives to understand their IT group and how to maximize performance while minimizing costs.
Bimodal IT is not an optimal goal for most companies, yet key concepts can be used as stepping stones to achieve a high-performing IT group.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
4 Reasons Why Corporations Are Losing in the New World of Platforms
by William Aimone
We have all witnessed countless examples of new technology platforms turning traditional industries on their heads. Uber has wrecked the taxi industry. Amazon has shuttered big box stores. Expedia has boarded up travel agencies.
What industry is next? Realtors? Consulting? Energy? Changes are coming quickly.
The burning question is: why aren’t traditional corporations at the forefront and winning the platform race? Traditional companies have a clear advantage over the upstart. Traditional corporations have money, an existing base of customers, customer data, and the staff. Not to mention the physical assets and infrastructure to manufacture, service, and deliver to the customer.
So why are traditional corporations losing to the new guys on the block? What can they do differently?
Assign the Right Champion
Too often, corporations hand the platform challenge over to the CIO. It’s technology, right? The CIO approaches the challenge not unlike the last ERP implementation, gathering existing requirements and implementing the system that meets most of them. The resulting system is not user-friendly, and adoption is met with resistance. This approach is problematic for several reasons. By definition, a platform eliminates friction. The purpose of a platform is to make the connection between buyers and sellers easier, with little to no resistance. Platforms are built with ease of use and engagement—not a list of requirements—as the primary driver. Building a platform begins with mapping and understanding the customer’s journey. Platforms don’t start with technology, so punting the development to the technology team is a big mistake. Find an internal champion who is forward-thinking and intimately knowledgeable about the customer (no prior technology experience required). Most importantly, the champion should be able to select their team from the business rather than asking the business to elect people.
Pretend the Corporation Does Not Exist
Corporations tend to build platforms around what already exists or what is known within the organization. The MVP (minimum viable product) becomes over-engineered. Most of the time and effort is wasted trying to integrate the platform with the legacy systems and data inside the company. Traditional corporations have an array of products, inventory locations, transportation mechanisms, etc. It is very easy for a company to fall into the trap of trying to mirror the data and information that already exist within the company in the new platform.
Dell Corporation was one of the first traditional companies to jump on the early e-commerce bandwagon. How did they get there first? Their e-commerce platform started with a simple customer interface. All customer product and order data was manually entered in their internal systems for processing. Building a platform for collaborating with external parties should start with integrating the customer experience—not the internal technologies. Internal integration can come later once the platform is proven to attract customers.
Get the Board Excited About It
Entering the platform world is not for the faint of heart. The corporation will want to get endorsement and encouragement from the board. Why? Because it’s that important. The long-term costs associated with launching a platform will likely be large enough to require the board’s spending approval. More importantly, the strategic benefits will be enormous for the corporation.
Once the rest of the organization realizes the new platform initiative has the board’s support, every executive is encouraged to contribute in a positive way. Every board deserves to hear about the fun and exciting stuff the corporation is doing. The endless discussions on cybersecurity and Sarbanes-Oxley controls are not what get board directors excited about the company they represent.
Eat or Be Eaten
In every industry, there’s an entrepreneur out there with a new platform idea already in the works. The concept has been developed, but the entrepreneur might be struggling to get the traction needed to take the platform viral. The platform may need to integrate with a larger existing company’s distribution, manufacturing, or service network, or the entrepreneur may just be out of funds. Why not buy out the entrepreneur or join forces with them? Large corporations are well poised to enter the platform economy. They already have the legal, technical, financial, and human resources to accelerate an entry into a platform model. Then, the corporation will at least have a starting point with a concept to build upon with their industry expertise. Or the corporation can follow the Whole Foods model and wait for the platform to eat them up with an Amazon appetite.
5 Tips for Building the Right Multi-Sided Platform for Your Business
by Evan Lambrecht
Commerce as we know it today is in the beginning phases of a paradigm shift. The traditional pipeline business model, where producers provide a product or service and push it through a channel down to consumers, is being replaced. A new business model known as the multi-sided platform is gaining traction in the market. Multi-sided platforms directly connect masses of buyers and sellers in a single electronic marketplace.
Amazon is strangling traditional retail stores. Uber is decimating the taxi industry. Airbnb is keeping hotel chains on the edge of their seats. Most companies recognize the new economic trend, but awareness is only the first step in a long and complicated process to shift the traditional pipeline model toward utilizing platforms effectively. Traditional companies do not know all it takes to successfully implement platform technology, and they are throwing money into projects that do not produce the desired results. The lack of understanding of how multi-sided platforms work and create value is causing failures and delays.
What Is a Multi-sided Platform?
According to “Platform Revolution” by Parker, Van Alstyne and Choudary, a platform is defined as a business model which brings together producers and consumers to facilitate the exchange of information, currency, and goods or services. The platform marketplace generates value by connecting communities using technology. The connection reduces friction by enabling mutually beneficial transactions to take place between two or more parties without a middleman.
Another key element of many successful platforms is the use of crowdsourcing. Crowdsourcing is the method of gathering either funding or resources from outside the company, usually through various channels on the internet. By leveraging third-party resources, platforms can remain nimble and scalable while keeping fixed costs low. Platforms have virtually rewritten the book on valuation by de-linking assets from value. This unique approach has allowed companies such as Uber and Airbnb to receive appraisals in the tens of billions of dollars without owning a single car or home, respectively. Combine crowdsourcing with the concept of positive network effects—the idea that the more people use the platform, the more valuable it becomes—and we begin to understand how multi-sided platforms are achieving unprecedented levels of growth and success.
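A rough way to picture these positive network effects is to count the possible connections among participants, which grows roughly with the square of the number of users. The sketch below is purely illustrative, a simplified Metcalfe-style calculation rather than a valuation model used by any particular platform:

```python
def possible_connections(n_users: int) -> int:
    """Pairwise connections between platform participants: n * (n - 1) / 2."""
    return n_users * (n_users - 1) // 2

for n in (100, 1_000, 10_000):
    print(f"{n:,} users -> {possible_connections(n):,} possible connections")
# 100 users -> 4,950 possible connections
# 1,000 users -> 499,500 possible connections
# 10,000 users -> 49,995,000 possible connections
```

The takeaway: each new member makes the platform more valuable for everyone already on it, which is why growth tends to compound once a platform reaches critical mass.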
Where Do We Start?
One of the biggest misconceptions about successful platform technology is that you must completely rewrite your business plan, hire expensive software engineers, and move your company to Silicon Valley. That is simply untrue. Many companies in traditional industry segments leverage platform technology to increase efficiencies and reduce friction without changing their overall strategy. Moreover, new platforms are popping up across the country.
Below are five tips for building the right multi-sided platform for your business:
1. Understand the user journey
Many projects nosedive from the start due to failure to focus on the most important aspect of any multi-sided platform: the user journey. Before development begins, a platform must clearly articulate the specific type of end-users it is targeting and the overall experience it wants to deliver. The definition can be achieved by creating two crucial items: user personas and journey maps.
- User personas: Creating a series of user personas will clearly define the type of people the company envisions will be using the platform. For example, one persona for Uber might be Jeff, a 21-year-old biology student who needs a ride home after he’s had one too many at the bar. Another persona could be Sharon, a 30-year-old project manager who lives in Chicago without a car and needs to get to and from work every day. By understanding their customers, developers and marketers can better tailor technology to the demographic target market.
- Journey maps: Journey mapping is the process of visualizing and understanding a user’s journey toward accomplishing a goal while using the platform. Journey maps are like process flows but with more emphasis on the user experience. Journey maps are used to identify user needs, emotions, and pain points before development begins.
2. Develop wireframes
First impressions matter, and a clean, inviting interface is important. In many cases, the user interface is more important than the actual functionality of the platform application. Wireframes are skeletal outlines of a website or application that depict exactly how every screen and step in the process should look once the final product is up and running. Wireframes are an essential piece of the planning phase because they communicate the overall aesthetic and usability to developers, marketers, and potential investors. Wireframing often requires the enlistment of graphic artists to ensure high quality.
3. Scope the MVP appropriately
Minimum Viable Product (MVP) is the first functional version of a new piece of technology. While you may have grandiose plans for how your platform will ultimately work, a trimmed-down version allows a company to start with something small instead of spending years trying to build the perfect application. An MVP is essentially a skeleton of the product. It is just functional enough to communicate your message and experience given a very limited budget. Don’t get caught trying to boil the ocean here, or your platform will never make it off the ground. The purpose of the MVP is to gain feedback along the way to understand what users desire in the application. Only after you have wooed your intended audience and listened to their feedback can you start adding the user-desired bells and whistles.
4. Define test markets
The initial test markets for your platform must be properly identified. Test markets are small, specified groups used to test the viability of a platform before its official launch. Test markets are most commonly limited to a certain geographic region, such as a city, or to a specific demographic of test users. The test markets must include a sample of the users you anticipate will be using the platform going forward. Otherwise, your test marketing data will be unreliable and will not reflect how your user base will react when the platform is launched. Third-party firms that specialize in this type of testing are often hired at this stage in the development process. Accurate test marketing is critical because it allows for last-second tweaks if any key issues are exposed.
5. Prepare for failure
No one likes to dwell on the potential of failing, yet the sobering truth is many platform launches are unsuccessful. At least some aspect of every platform endeavor will fall short of expectations. The key is to be realistic when setting goals and to remain motivated even when parts of the plan don’t come together perfectly. It’s wise to face the threat of failure head-on by creating a backup strategy in case your platform is ever at risk of falling on its face. The backup strategy might be other uses for the platform or something as simple as changing the target market for the platform. Favor started out as a burritos and beer delivery service and is now connecting delivery drivers who can deliver anything from food to school supplies.
Whether your goal is to transform the company into the next tech unicorn or simply create more efficiencies within your own walls, these five tips apply. At Trenegy, we have developed a comprehensive Multi-sided Platform Development Checklist that helps companies ensure they don’t miss a step. Contact info@trenegy.com for your complimentary copy today.
Using Platforms to Transform the Supply Chain
by Nicole Higle
An effective supply chain has three key elements: market exposure to products and services, direct collaboration between buyers and suppliers, and cost efficiency. Managing these elements in a traditional supply chain model is not easy. Buyers have to actively manage relationships with a variety of suppliers and often do not have clear visibility when new products or services are offered by new suppliers. Buyers fall into a one-sided supplier management rhythm, working the phones and email with the suppliers they know, as opposed to the suppliers who provide the best products or services at the best price.
Today’s cloud-based platforms eliminate the old-fashioned routine. For example, consider Airbnb. The online dashboard creates value by enabling direct collaboration between homeowners and renters. Cloud-based supply chain models work in the same way. Buyers use the platform space to efficiently work with a broad range of suppliers, who in turn gain access to buyers in real-time.
Platforms are creating value for both buyers and suppliers by:
1. Increasing exposure and reducing prices
Platform business models provide a space for consumers to openly interact with producers, and vice versa. Likewise, a good supply chain platform will provide this space, and it will also offer incentives for becoming a member. As more members join the platform, seller presence becomes visible to an increased number of buyers, heightening product exposure. Then, as the network effect kicks in, elevated purchasing traffic drives down the cost of the product, allowing purchasers to obtain products at a lower cost.
2. Discontinuing service level agreements
Platforms define base compliance standards, reducing the need to manage SLAs. Standards for doing business are built within the platform itself and platform members agree to these terms in order to participate. Cloud-based supply chain platforms hold both sides accountable for meeting standards relating to shipping time, response time, annual fees, payment methods, refunds, etc. Terms and conditions will vary depending on the platform, so it’s important for potential participants to consider which guidelines are important and correspond with their business strategy.
3. Granting access to real-time information
Cloud-based supply chain management platforms place an emphasis on needs planning and scheduling. Buyers can upload inventory with predefined depletion notifications, allowing suppliers to bid on replacements in near real-time. Both sides have the ability to respond quickly to changes in the supply chain. If a shipment is late, suppliers can provide up-to-date tracking information. If a shipment is extremely late, buyers can access the platform to identify what is available and when.
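The mechanics behind these depletion notifications can be pictured as a simple threshold check. The sketch below is illustrative only; the field names, reorder rule, and bid-request format are assumptions rather than any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    sku: str
    on_hand: int
    reorder_point: int  # predefined depletion threshold set by the buyer

def depletion_notifications(items):
    """Return a bid request for every item at or below its reorder point."""
    return [
        {"sku": item.sku, "quantity_needed": item.reorder_point * 2 - item.on_hand}
        for item in items
        if item.on_hand <= item.reorder_point
    ]

# The platform would broadcast these requests to suppliers, who respond
# with bids in near real-time.
stock = [
    InventoryItem("PIPE-4IN", on_hand=12, reorder_point=50),
    InventoryItem("VALVE-2IN", on_hand=400, reorder_point=100),
]
print(depletion_notifications(stock))
# [{'sku': 'PIPE-4IN', 'quantity_needed': 88}]
```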
4. Eliminating approved vendor lists
Because supply chain platforms ensure procurement from only approved suppliers, the practice of managing an approved vendors list becomes obsolete. Previously, buyers invested a great deal of research and vetting into determining which suppliers were approved. Now, buyers can use platforms to filter suppliers by established company protocols and purchasing criteria (e.g. product quality, timely delivery, and supplier ratings). Most importantly, existing suppliers can be bypassed or banned based on real-time evaluation of performance.
5. Making it easy to connect with and review new partnerships
Platform business models make it easy to shop around for potential suppliers and partners by providing direct feedback from other platform members. Similar to checking Yelp for restaurant reviews, business partners evaluate buyers and suppliers based on their experience. When your long-time pipe supplier is out of stock, there is no need to wait for inventory to be replenished. Instead, users can search for comparable products on the platform and review supplier ratings to make the best decision for current purchasing needs. References and reviews are beneficial for users looking to establish new connections or alternative options. They also help instill confidence in transacting with unfamiliar customers and suppliers.
Cloud-based platforms are changing supply chain procurement strategies for the better. With increased market exposure, easily realized cost efficiencies, and an environment for direct collaboration, platform business models are equipping buyers and suppliers with the tools necessary to succeed in supply chain management. Trenegy helps companies identify innovative solutions that streamline the procurement process. To learn more, contact us at info@trenegy.com.
The Gig Economy: Gigster Impact on Corporations
by Julie Baird
On Mondays and Fridays, Beth drives for Uber, picking up and dropping off traveling professionals at the airport. On Tuesdays and Wednesdays, she meets with some professionals she met through Sortfolio at a coffee shop to discuss her design recommendations for their new websites. If she feels like it, she can do someone a Favor by picking up a coffee and delivering it down the street. On Thursdays and Saturdays, she hangs picture frames and assembles IKEA furniture through TaskRabbit, or she just hangs out with her kids. Beth does not have a full-time job. Is Beth lazy? Is she uneducated? Maybe. Or maybe she has just chosen a different lifestyle—a more flexible approach to work.
What Is the Gig Economy?
The gig economy, or the sharing economy, is an economy composed of short-term, temporary work contracts or freelance work. The gig economy “gigsters” perform a specific task and leave. The gig economy is commonly misconstrued as a way for workers with a full-time job to make extra money, but that is not always the case. Uber, Instacart, Favor, Rover, and TaskRabbit are platform companies known for building their businesses on the gig economy. Many of the contract workers hold jobs across several of these platforms. The gig workforce is made up of more than just pay-per-service workers. Many technical professionals, including lawyers, accountants, and engineers, are participating in the gig economy.
Why Is It Growing?
The continued advancement of technology will drive the expansion of the gig economy, as companies and individual workers are able to easily connect over platforms. With the growth of technology-based education, individuals can easily access specialized training to become experts in various industries, business processes, or subject matters. Platform businesses are leveraging technology to connect specialized white-collar professionals with large corporations. Companies are looking for people who have the targeted experience the company can use to solve specific problems. Designers, marketing staff, and IT specialists are common roles that can be filled on a freelance basis.
Millennials prefer flexible schedules that allow them to fully experience life, travel, and family. Having seen baby boomer parents trade family and life experience for a paycheck, most millennials are opting for seemingly less rigid opportunities. Traditional career advancement came in the form of a promotion, which meant more responsibility and money. The gig economy offers advancement by leveraging online learning opportunities and turning knowledge into immediate cash. The more knowledge and “stars” a freelancer earns on a platform, the more money the freelancer makes. Flexibility is winning, and the rigid career ladder is not worth the trouble.
How Can It Benefit My Company?
Even large companies can benefit from leveraging the gig economy. Platforms give companies access to a larger talent pool to provide targeted expertise. For example, a talented tax professional is hired as a full-time employee. The tax professional may spend four hours per week providing high-value expertise for the company. To fill his work week, the remainder of the tax professional’s job includes more mundane paperwork. The tax professional commands a high salary regardless of what other work he is assigned. The result? The company is paying more, and the tax professional is bored with the mundane work. Fast forward to the gig economy: the company hires a less-skilled administrator for half the salary to handle the mundane tax paperwork and crowdsources a freelance tax professional when needed. The freelance tax professional may charge double for his four hours per week, and he can leverage his experience across several companies, increasing his flexibility and his return. Both the company and the tax professional win.
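To make the economics concrete, here is a rough back-of-the-envelope comparison. The salaries and rates below are assumed figures for illustration, not research data.

```python
# Traditional model: one full-time tax professional covers both the
# specialized work (4 hours/week) and the mundane paperwork.
full_time_salary = 150_000          # assumed annual salary

# Gig model: a lower-cost administrator handles the paperwork full time,
# and a freelance specialist is crowdsourced for the four expert hours.
admin_salary = 75_000               # assumed: half the specialist's salary
freelance_rate = 150                # assumed hourly rate, roughly double
freelance_hours_per_year = 4 * 48   # 4 hours/week, ~48 working weeks

gig_model_cost = admin_salary + freelance_rate * freelance_hours_per_year
print(f"Traditional model: ${full_time_salary:,}")   # Traditional model: $150,000
print(f"Gig model:         ${gig_model_cost:,}")     # Gig model:         $103,800
```

Under these assumed numbers, the company spends less overall, and the freelancer earns a premium rate that can be repeated across several clients.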
A Fad or a Real Trend?
As technology continues to improve, companies that ignore the gig economy may not be as competitive in the fight for talent. Companies should understand the unique value the core workforce and the gigsters each bring to the organization. The core workforce sustains the company’s competitive leadership and culture, while freelancers bring specialized knowledge when it is needed. Balancing both forces of workers is important for companies to capitalize on the opportunities provided by the gig economy.
The Cloud, Demystified
by Peter Purcell
It sounds ethereal and intangible, but the cloud is pretty down-to-earth. Simply put, the cloud stores data and programs on servers which can be easily accessed via the internet. This frees up data from being stored locally on an individual’s computer or an organization’s on-site server. Fittingly, The Weather Channel (TWC) utilizes the cloud to deliver fifteen billion forecasts each day. Such computing power allows weather.com to be available within one hundred milliseconds to anyone on earth—not an easy task. To transform from the tornado-chasing cable TV company to a global weather data provider, TWC acquired various companies around the world, leading to a complicated IT infrastructure that contained over a dozen data centers. By moving to the cloud, TWC cut the number of data centers in half, saving $1 million each year. TWC is not the only company leveraging the cloud.
Though the cloud is one of the tech industry’s most popular technologies, its roots actually precede the World Wide Web. In fact, the concepts that led to the cloud began when computers were still the size of rooms. Too costly for a single company to purchase and maintain, these mainframes were accessed through time-sharing, similar to the pay-as-you-go plans that many cloud companies currently utilize. A brief history follows:
- 1969 – ARPANET (Advanced Research Projects Agency Network), inspired by J.C.R. Licklider’s vision of a globally connected network, went live and became the basis for what we know as the internet today.
- 1970s – IBM improved on time-sharing by creating Virtual Machines, which allowed multiple virtual systems on one computer.
- 1990s – Telecommunication companies developed Virtual Private Networks (VPNs), providing more control over networks and bandwidth.
- 2000s – The term “the cloud” was coined to refer to any shared pool of computer resources accessed over the internet.
To help clear the air on the cloud, the National Institute of Standards and Technology has defined five essential characteristics of cloud technologies:
- On-demand self-service – Providing computing resources on an as-needed basis without human interaction (e.g. signing up for a new app is immediate).
- Broad network access – Access to data and programs over the internet on multiple devices, from desktop to mobile (e.g. accessing home cameras from smartphone and desktop).
- Resource pooling – Combining computing resources to serve multiple users, often from different locations (e.g. Amazon Web Services providing infrastructure).
- Rapid elasticity – Scaling to size and power of the user, no more and no less (e.g. adding more space for iCloud photos).
- Measured service – Monitoring and controlling usage, similar to utilities (e.g. Netflix measuring what is watched).
Today, not all clouds are the same. With the availability of multiple cloud deployment models, there’s a good fit for consumers of all kinds. What about cybersecurity? It’s an outdated, uninformed fear that data on the cloud is not secure. In fact, data stored onsite or in proprietary servers is often less secure and more susceptible to cybersecurity attacks than most cloud services today. Storing data on the cloud can easily be part of an organization’s airtight controls environment and proactive risk management. The cloud has four different deployment models to meet different consumers’ needs:
- Public cloud: A cloud open to use by the general public and owned by a business, academic, or government institution. There’s not much difference between a public and private cloud other than access. Even a company’s use of a public cloud can be SOX compliant and part of a successful internal controls environment. Common examples of public clouds include Dropbox and Google Drive.
- Private cloud: A cloud dedicated to a single organization with similar advantages to the public cloud but with increased security and regulations. Private clouds can be owned and operated by the company, a third party, or both. Cloud servers can be located on or off site with computing resources protected by the organization’s firewall.
- Community cloud: A private cloud shared with multiple organizations. Organizations in healthcare or government can benefit from such a model.
- Hybrid cloud: A cloud model with any combination of the other cloud models. Large corporations may use this model to protect trade secrets while also benefiting from the scalability of a public cloud provider.
Next time someone speaks of the cloud in an ominous tone, ask if they have checked the weather for this weekend or if they have seen “House of Cards” on Netflix. The cloud has freed computing power from the limitations of a single device and has revolutionized when and where companies conduct business. Utilizing the service and deployment models mentioned above will open all kinds of possibilities for individuals and organizations, including cost savings, risk mitigation, and enhanced controls. The sky’s the limit.
Hosted Solutions: Looking to the Cloud
by Lauren Saathoff
“There’s an app for that!” is heard countless times on television, radio, and in everyday conversation. The cell phone has evolved from a communication device to a personal assistant, revolutionizing the way we live.
The same can be said about cloud-based or hosted solutions for business. Similar to mobile applications, cloud-based solutions allow for more streamlined and highly efficient business processes such as budgeting and forecasting, AP, JIB processing, and CRM. Organizations can use hosted solutions to increase efficiency and decrease total cost of ownership compared to installed software.
Cloud-based hosted solutions allow organizations to give their employees instantaneous access via a secure internet connection and a user-friendly interface rather than confining data to a hard drive or an internal network. A hosted solution also shifts a significant workload: software updates, backups, and hardware maintenance are taken on by a third-party service provider rather than an in-house IT department. This allows the IT department to shift its focus to supporting the company’s day-to-day revenue generating activities.
Smart phones revolutionized the way humans operate on a day-to-day basis, and hosted solutions are revolutionizing the way organizations approach their day-to-day business processes. Yet there are still misperceptions about application security, cost, and ability to customize in a hosted environment. Hosting companies have addressed each of these.
Security
This is the number one fear organizations face when contemplating switching to a cloud-based hosted solution. Unless an organization has a full-time IT staff dedicated to maintaining and backing up its data, a hosted solution can provide more security than installed software. This is because vendors have more resources to invest in highly secure servers, facilities, and a full-time staff dedicated to ensuring all data is encrypted. It is also important to note that risk decreases significantly when servers are not housed on site. Critical data can be accessed anywhere with a secure internet connection, despite server damage or data loss. On the other hand, data may take months to recover when using installed software.
Not all third-party cloud providers have the same security policies, so it’s critical to ask questions in order to understand the policy:
- How much access do you have to my data?
- What risk do I assume in choosing you as my service provider?
- How well is my data protected?
It’s important to remember that vendors handle data with extra precaution because their reputation is on the line once a contract is signed.
Cost
Purchasing a hosted solution requires an upfront fee and a monthly or annual fee. The fee is often a function of the number of users an organization has and the amount of data stored in the system. The initial start-up fee is significantly lower than that of installed software.
Cloud-based hosted solutions do not require a substantial initial investment, because organizations are no longer required to integrate additional hardware to support software deployment or purchase additional storage space to house servers. Maintenance is virtually eliminated, as everything is taken care of on an external network by a third-party provider.
Using a hosted solution does not require organizations to purchase hardware, hire a dedicated IT staff for support, or purchase and configure software on individual computers. Therefore, hosted solution implementations take weeks compared to the years organizations require for installed software implementations. This yields a greater ROI over the lifetime of the application.
Customization
There is not a single application in which smartphone users can manage finances, access social media, check email, and listen to music. Similarly, organizations aren’t restricted to a single application to meet all of their business needs. The possibilities are endless, and hosted solutions are able to integrate with virtually any software using API tools.
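As an illustration of what such an API-based integration typically looks like, the sketch below pulls records from a hypothetical hosted solution over a REST API. The URL, endpoint, and token are placeholders, not any real vendor's interface.

```python
import json
import urllib.request

# Placeholder values: a real hosted solution documents its own endpoints,
# authentication scheme, and payload formats.
BASE_URL = "https://api.example-hosted-solution.com/v1"
API_TOKEN = "replace-with-your-token"

def fetch_invoices(status="open"):
    """Pull invoice records from the hosted solution over its REST API."""
    request = urllib.request.Request(
        f"{BASE_URL}/invoices?status={status}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# The returned records could then be loaded into a reporting tool, an
# on-premise ERP, or another hosted application.
```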
Although cloud-based hosted solutions may not be customized to an organization’s every specification, one of the primary concerns for hosted solution vendors is ensuring their clients are comfortable with the application interface through training, help boards, and 24/7 support. Clients import and export data using a common data interface, lending familiarity to the solution.
Another benefit of choosing a hosted solution is that it is flexible enough to meet an organization’s changing business demands. For example, adding and removing users is as simple as the click of a button.
To Host or Not to Host?
Cloud-based hosted solutions provide several benefits, including decreased TCO compared to installed software, increased accessibility, and scalability to meet business demands. Organizations still need to examine critical success factors like integration and alignment with current software to determine if a hosted solution will meet current and future state needs. It’s also important to conduct a rigorous package selection to determine which vendor most closely aligns with their needs.
Software as a Service, Demystified
by Peter Purcell
Sunday evenings tend to creep up on those blissfully enjoying the weekend. All of a sudden, five o’clock rolls around and it’s time to think of dinner plans. Italian sounds good on this chilly winter evening. And now for all the yummy Italian options: drive up the street to Gaetano’s for a nice dinner out, call Tony P’s for take-out, or just heat up some store-bought sauce in a jar and throw it on top of some day-old pasta? Gaetano’s it is for a fancy, full-service dining experience.
The decision to dine at home or at a full-service restaurant is similar to the decisions companies make when evaluating whether to buy Software as a Service (SaaS) or manage the IT software and hardware inside the company. When dining at a restaurant, people are given food (software) on the restaurant’s table, with utensils and napkins (hardware). Servers take care of orders and bring everything to the table (service to ensure the software is available when needed). When dining at home, people must make their own food (build software in-house), use their own utensils and table (buy the hardware and install it), and clean up afterward (support the software and hardware inside the company). Often, avoiding cleanup is the reason for choosing to dine out in the first place.
SaaS is a concept gaining serious traction throughout the business world. The idea is that technology companies can provide computing resources over the cloud and support a company’s IT environment as a service. The cloud, not to be confused with the white, nebulous blobs in the sky, is merely a catchy phrase to describe the process of delivering computing power over the internet. The “as a Service” component means a service is provided along with the software. The cloud would equate to going to Tony P’s for takeout—no service, just food. The cloud becomes SaaS when there is a service provided—dining out.
By handling all of the software, hardware, and support as a service, SaaS providers can package services for a simple fee. This includes procuring and maintaining massive servers, facilitating software updates, implementing security protocols, and troubleshooting issues. The traditional on-premise IT model requires a company to handle all of the services mentioned above inside the company. With the SaaS model, all a company needs is an internet connection and a browser to access resources otherwise requiring serious technical know-how. The SaaS model has helped companies take a large part of the IT burden off of the company’s shoulders. Years ago, the IT department would place a capital request for a multi-million dollar data center to handle growth. Today, this level of capital spending is no longer necessary.
Let’s Get SaaS-y
SaaS technology allows end-users to access entire software applications over the internet with a smaller IT footprint. This reduces IT overhead, making SaaS solutions an attractive option for companies who have gone with the traditional (and expensive) on-premise model of software delivery in the past.
Before SaaS became one of the trendy, overused business buzzwords, Application Service Providers (ASPs) had a similar vision during the dotcom boom of the late 90s. Similar to SaaS solutions, the ASP companies handled all IT needs, allowing customers to access the software applications through an internet browser. The ASP companies were slightly ahead of their time. Technology had not yet reached the level needed to make this concept effective and the ASPs were not as successful as hoped. As years went by and technological advancements were made, the ASP model was revisited, tweaked, and voila! SaaS was born.
The growing success of SaaS companies can be attributed to their incorporation of the cloud and the use of multi-tenant architecture. Salesforce, by far the most successful SaaS company to date, provides an excellent analogy for the multi-tenant concept. Think of multi-tenancy as renting an office in a high-rise building. You have your own space within the building where you can keep all of your information private, but day-to-day operations like repairs, maintenance, and security are handled by the building owner and included in the rent. The amenities are shared by everyone, reducing the costs at the individual level. This is more or less how SaaS solutions are able to operate and keep costs reasonable. Without getting too technical, this setup allows for greater scalability, faster performance, and simpler maintenance by using the same base code and database platform for all users as opposed to tailoring the software to the needs of each individual customer.
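One common way the multi-tenant concept is implemented is a shared database schema in which every row carries a tenant identifier, so each customer sees only its own data while everyone runs on the same code and infrastructure. The sketch below is a simplified illustration with made-up table and column names; real SaaS vendors layer far more isolation and security on top of this idea.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT, balance REAL)")

# Two tenants share one table, one schema, and one code base.
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [
        ("acme", "Operating", 125000.0),
        ("acme", "Payroll", 40000.0),
        ("globex", "Operating", 90000.0),
    ],
)

def accounts_for(tenant_id):
    """Scope every query to the caller's tenant, like a private office
    inside a shared building."""
    rows = conn.execute(
        "SELECT name, balance FROM accounts WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(accounts_for("acme"))    # [('Operating', 125000.0), ('Payroll', 40000.0)]
print(accounts_for("globex"))  # [('Operating', 90000.0)]
```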
On-premise vs. SaaS
SaaS has strengths and weaknesses. SaaS is more economical and quicker to implement while on-premise solutions provide a company with more flexibility.
Massive enterprise software packages that have been customized to meet the unique needs of a large global organization may not be candidates for a SaaS solution. SaaS solutions shine when used to handle specific, individual business functions. This approach makes SaaS solutions particularly attractive for small and medium-sized businesses that don’t want to hire a large IT staff. A few of the most popular SaaS software solutions include:
- Salesforce: Salesforce is the largest SaaS company in the world. Their customer relationship management (CRM) software allows businesses to manage everything related to sales and marketing. Their product is also a SaaS development platform.
- Workday: This human capital and financial management solution can be used for a myriad of HR and administrative processes.
- Dropbox: This cloud-based file sharing software allows users to update folders and documents in real time from multiple locations.
- Anaplan: This planning and analytic platform is used for finance, sales, operations, and HR planning.
Companies all over the world, large and small, are utilizing SaaS solutions to help enable various parts of their business. A combination of both on-premise and cloud-based solutions is becoming increasingly popular, as the SaaS solution can often be integrated or bolted on to existing systems.
The Last Bite
Will SaaS be the magical end-all-be-all solution to all IT woes? Probably not. If a company’s on-premise software solutions are becoming a real pain, then making a company a little SaaS-ier can definitely help. Companies from all industries are adopting SaaS as a way to improve the bottom line.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
3 Ways to Prepare for Moving to the Cloud
by Nicole Higle
The movement to modernize IT is influencing companies to move to the cloud. Slow adopters haven’t made the switch, but many have near-term plans to make the change. Companies migrating to the cloud expect to immediately realize positive impacts, like reduced operating costs, efficiencies in connecting users across geographic locations, and transferring administrative IT responsibilities. In reality, moving to the cloud becomes considerably more challenging if companies do not first employ adequate methods to address IT demand with existing infrastructure. Companies should establish reliable procedures, like the following, to prioritize and manage IT demand and help mitigate risks when moving to the cloud.
1. Clearly define the difference between request types
Before processes can be designed to address IT demand, categories for issues and request types must be defined. Keep this simple. Companies tend to get caught up defining a complex hierarchy of issue categories that only end up being used incorrectly. The more options users have, the more likely they are to select the wrong one.
Eliminate categories for defects, bugs, improvements, new features, etc., and consider using a simplified version. Ask the following questions to determine the issue or request type:
- Was it working before (e.g. unable to connect to printer)? = Break Fix
- If it wasn’t working before, is it a simple update (e.g. add a new vendor type)? = Quick Fix
- Is the request for new functionality (e.g. feed financial data to reporting tool)? = Enhancement
Keeping request categories simple helps users and IT support better understand how requests should be tagged for resolution. It also provides a more accurate foundation for trend analysis and reporting.
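For teams that track requests in a simple tool or spreadsheet export, the triage above can be expressed as a small rule. The sketch below is illustrative; the questions mirror the list above, and the function and field names are assumptions.

```python
def categorize_request(was_working_before, is_simple_update, is_new_functionality):
    """Apply the three triage questions to tag an incoming request."""
    if was_working_before:
        return "Break Fix"    # it worked before and now it doesn't
    if is_simple_update:
        return "Quick Fix"    # small configuration or data change
    if is_new_functionality:
        return "Enhancement"  # routed through the planning/governance process
    return "Needs Review"     # doesn't fit cleanly; follow up with the requester

print(categorize_request(True, False, False))   # Break Fix
print(categorize_request(False, True, False))   # Quick Fix
print(categorize_request(False, False, True))   # Enhancement
```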
2. Establish a process to prioritize and manage planned IT demand
Enhancement requests are considered planned IT demand. Companies can evaluate the scope of the enhancement, assign necessary resources for development and support, and schedule releases as far as six months down the road. Simple, right? What often happens is help desk support mis-categorizes the enhancement request, the request gets lost in the help desk ticketing system black hole, and the requester feels ignored.
The first step to managing planned IT demand is to require a business case for the requested enhancement. Again, keep it simple. This doesn’t need to be laborious, but answering the questions, “Why is the improvement necessary?” and, “How much time will it save?” will help build a business case for allocating time and resources to address the request.
IT should form a small committee and conduct weekly stand-up meetings to prioritize approved business cases and plan for upcoming enhancements. Managing planned IT demand works best with an agile methodology, and utilizing a waffle board can keep release schedules organized.
The key takeaway here is don’t send enhancement requests through a help desk ticketing system to vanish. Invest in an agile development tracking tool to help align upcoming releases with available resources.
3. Explore alternatives to manage unplanned IT demand
Unplanned demand comes in two forms, categorized in the previous section as “break fix” or “quick fix.” The most common approach for managing these unplanned needs is to route them through an internal help desk system, which results in a variety of problems:
- Tickets are assigned to multiple people before the appropriate resource is identified
- IT support can’t reach the ticket submitter for clarification and testing
- A backlog of tickets piles up with low visibility into the bandwidth of IT support staff
With the growing accessibility of AI over the last year, solving IT problems is becoming easier, but the problem isn’t solved completely. Explore alternatives to the traditional help desk and invest in a tool that enables faster, more accurate support when you need it.
Dispelling IT Outsourcing Myths
by Peter Purcell
Managing IT in house is like being your own shadetree mechanic. If you’re going to do it, you better enjoy it. I enjoy working on my own cars, though I don’t believe the myth that I’m saving money. Home mechanic work always takes longer than anticipated. I have to use manuals, follow step-by-step instructions, and make frequent visits to the auto parts store. I usually end up buying expensive, specialized tools I only use once. For most fast-growing companies, managing IT is no different.
Growth organizations place stress on IT departments. During an organization’s startup years, the IT department consists of a person supporting a small server rack in a closet. Network needs can be met using a wireless router. Help desk requests are solved by shouts down the hall. As an organization experiences growth, the complexity of the IT department grows exponentially.
Growing IT demands result in the creation of various IT functions. These functions include help desk, networking, server, applications support, database administration, and security groups. Each of these functions is necessary to support the growing business’s IT needs; however, economies of scale are rarely achieved. Critical positions require backup, resulting in underutilized staff.
Demand for hardware and software increases. Critical applications are monitored with tools that are rarely fully utilized, and servers have redundant backups in case the primary crashes. Our research has shown that more than half of mid-sized, high-growth companies are using less than 10% of their hardware capacity!
Organizations continue to pay for underutilized human and computer resources, so outsourcing IT becomes a hot topic. However, many high-growth organization executives are hesitant to outsource given the relative unknowns in the IT world. These unknowns are found in five myths we commonly hear regarding IT outsourcing.
Myth 1: Our organization isn’t big enough for the outsourcing providers to care about
Historically, IT pundits argued that most IT outsourcing providers cater to larger organizations and don’t provide a cost effective solution for the middle market. This was a valid argument ten years ago when outsourcing options were limited to large service providers. These companies were geared for large-scale outsourcing for big organizations. Today, this has changed.
A number of mid-market outsource providers cater to middle market companies. Many mid-market outsourcing provider executives worked for the larger outsourcers and recognized the service offerings could benefit mid-sized companies. The new companies have the same level of discipline and service as the larger outsource providers, but at a lower cost. Furthermore, these outsource providers have established a solid track record and become financially stable.
Myth 2: Outsourcing is not a cost effective solution since the outsourcer is making a profit
With outsourcing, high-growth companies no longer pay for redundant resources, only for the resources that are used. Furthermore, the largest hidden cost in an IT organization is the cost of upgrading hardware and software to take advantage of new releases. For an in-house IT organization, all of the upgrade costs are absorbed by the organization. In an outsourced environment, the upgrade cost can be shared across multiple customers of the outsourcer. This benefit more than offsets the outsourcer’s profit margin.
In an internally managed IT environment, hardware and software is typically purchased with “growing room,” which results in excess capacity. In an outsourced agreement, the IT environment is sized to support the company in the short term and will flex as the company grows.
Myth 3: Our data won’t be as secure as it is today
For many years there has been an argument that a company’s data must be stored on hard drives located on the organization’s premises for information to be secure. Somehow, the ability to see the disk drives where data is stored equates to security. In today’s world, having on-site data is less relevant than the security that is put into place surrounding the location.
Most outsourcers have spent a great deal of money and effort in developing hardened bunkers for their clients. Their core expertise is providing data protection from unauthorized access and natural disasters. The security tools and hardened facilities cost more than many mid-sized companies can afford on their own. Therefore, moving data to hard drives protected and managed by an outsourcer can provide adequate and cost effective security and data control.
Myth 4: Service levels will not be as good as they are today
We hear the argument that good IT customer service can only be obtained if the IT support staff are actually on site. We find this to be more of a comfort and convenience request, not a requirement. The sense of security achieved as a result of being able to walk down the hall and get immediate attention is a luxury, not typically a business necessity. With today’s access to online information and the communication technologies outsourcers provide, adequate support can be provided remotely at a lower cost. Moving to a remote support model is more of a change management issue than anything else.
Furthermore, a mid-sized company does not have to pay for a variety of full time specialists to be on staff to support the technology environment. These companies can take advantage of the economies of scale provided by the outsourcers and only pay for the services being used on an as-needed basis.
Myth 5: An outsource provider cannot support our specialized applications
This is the toughest argument facing outsourcing. Many organizations have specialized applications that need rapid IT support, requiring a specialized help desk. While the argument to keep this IT service in-house has some validity, an outsourcer can hire and provide these specialized resources just as easily as any organization. In fact, outsourcers will often assess and hire a customer’s more skilled IT staff to support the specialized applications, which results in a win-win situation for everyone involved. These specialized resources can be given more career growth and training options with the outsourcer, which results in less turnover of specialized support.
If You Love IT…
Outsourcing IT is not for every company. Organizations that thrive on IT capabilities as a competitive differentiator are not likely candidates. The larger, well established organizations that have already invested in world class IT capabilities will likely find outsourcing IT to be cost prohibitive. Since outsourcing IT comes in various forms and alternatives, organizations may want to carefully consider what parts of IT to outsource or keep in house.
While Trenegy does not provide IT outsourcing services, we can provide an independent perspective on the alternatives.
Business Process Outsourcing
by William Aimone
Business Process Outsourcing (BPO) is the process of hiring a third party to perform specific business functions or processes for a company. BPO is a common and growing practice, often grouped into five categories: onshore, nearshore, offshore, front office, and back office. Onshore, nearshore, and offshore describe BPO by geographic location.
Onshore
Onshore outsourcing is outsourcing to a vendor that resides within the same country as the operating business. Onshore is desirable because of its close proximity, allowing the company to monitor the work and receive quick responses. Most instances of outsourcing are onshore due to the ease of implementation and management, not to mention the positive public perception of creating jobs within the home country. The main drawback is a perceived high cost.
Nearshore
Nearshore outsourcing is outsourcing to a vendor in a nearby country. For example, Mexico and Canada are nearby outsource countries to the United States. The major benefits are time zone alignment and the potential for lower labor costs. In terms of cost, nearshore is perceived to be less expensive than onshore but still not considered cost effective.
Offshore
Offshore outsourcing is outsourcing to a vendor in a foreign country, most commonly India, China, or the Philippines. These lower-labor-cost countries house the most offshore outsourcing providers because of their density of educated, multilingual personnel. The language and cultural differences make offshore outsourcing difficult for many companies and customers, and quality issues tend to be more pervasive. The perceived costs are lower, but many companies see only the visible labor savings of offshoring and don’t account for the hidden costs, like time zone conflicts, language barriers, and high staff turnover.
Back Office and Front Office
The remaining two categories distinguish outsourcing by which part of the company is being outsourced. The back office is responsible for supporting the company and performing work that does not directly generate revenue or interact with customers. Examples include Accounting, Information Technology, Internal Audit, and HR. The front office includes the customer- or client-facing departments, typically the revenue-generating areas of the company, such as Sales, Manufacturing, and Customer Service.
The back office function can be outsourced in part or in full. For example, a business can outsource only invoice processing for accounts payable, or it could outsource the entire human resources department. In either case, it is important to have a manager responsible for communication between the business and the outsource vendor to ensure tasks are completed on time and accurately.
The front office function can be further categorized as service and manufacturing outsourcing, not to be confused with the service and manufacturing industries. Front office service outsourcing is a service provided to a customer or client by an undisclosed third party. In other words, it is a business providing a service to a customer through an outside company. Customer service call centers are a prime example. Answering agents are employed by the BPO provider, not the company whose brand name is on the product. A customer may think they are calling one company when they are actually routed to an outsourced vendor.
Front office manufacturing outsourcing refers to hiring a manufacturer to assist in the full or partial production of an item that is then sold under the hiring company’s brand. Manufacturing outsourcing is common in the grocery business. The grocery store’s private-brand products are likely manufactured by another company, not the grocery store.
Coke was one of the first and is now one of the largest companies outsourcing many areas of their business.
After failing in several business ventures, John Stith Pemberton created a product that would infiltrate world markets—a Bordeaux wine with coca leaf drink, better known as Coca-Cola. Although Pemberton’s product continued to grow in popularity, he struggled with the debts of his past business failures. He decided to outsource the bottling function of his supply chain process to an outsource manufacturer who could bottle products more efficiently. It also reduced the company’s involvement, saving time and resources. Still today, Coke’s bottling partner manufactures, packages, merchandises, and distributes the final beverage. Coke was one of the first businesses to use BPO and continues to outsource many functions throughout the company.
Many organizations believe outsourcing is a simple way to reduce overhead expenses. Not necessarily. Organizations often find outsourcing costs to be higher than performing the work internally.
Companies that decide to outsource for competitive reasons (private branding of groceries), for risk reduction, or to eliminate a time-consuming service (customer support) often have a more successful outsourcing experience. Companies that outsource for the sole purpose of cost savings usually end the outsourcing relationship.
Next time you want to eliminate a frustration in your life or company, think of BPO.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
How to Do More with Less in IT
by Peter Purcell
Many companies are striving to be more agile, efficient, and productive in response to uncertain economic conditions in 2016. Capital projects have been canceled while companies shift their attention to surviving in the current environment without hindering their ability to expand in the future. Functional areas are facing significant pressure to cut costs and “do more with less.” Successful cost reduction or right-sizing efforts result in organizational realignment, process improvement, and system changes.
Cost reduction initiatives will have a significant impact on IT. Not only will IT be asked to do more with less, but they will also face increased demand to make changes to existing systems in support of functional area realignment. Forward-looking CIOs and IT departments should proactively focus on:
- Reducing costs
- Increasing efficiency
- Improving security
- Migrating to the cloud
Reducing Costs
The unfortunate reality is that companies are overpaying for IT services. The real challenge for a CIO is to perform an unbiased review of operations when identifying cost reduction opportunities. IT personnel will state that IT is already lean and cannot endure additional budget cuts. Cost reduction opportunities will be identified if IT starts by focusing on licensing fees, projects, and personnel. None are easy to attack, but all three should be reviewed.
Companies pay maintenance on software that is no longer used. Initiatives to review software licenses and determine which maintenance fees should be discontinued are often started but rarely finished. Personnel hate this exercise because it exposes rogue software purchases and raises questions about IT’s ability to control access to the technical environment. Still, it is a worthwhile exercise given the potential savings that can be quickly identified. A similar exercise can be performed on existing hardware platforms, but may have a smaller benefit.
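As a rough illustration of this exercise, the sketch below compares a hypothetical list of software under paid maintenance against the titles actually in use and flags the fees that are candidates for discontinuation. The titles and fee amounts are made up; in practice the inputs would come from the contracts ledger and a deployment or usage inventory.

```python
# Illustrative only: flag maintenance fees paid on software that is no longer used.
# Titles and fees below are hypothetical placeholders.
maintenance_contracts = {            # title -> annual maintenance fee
    "Legacy Reporting Suite": 42_000,
    "ERP Platform": 310_000,
    "Old EDI Translator": 18_500,
}
software_in_use = {"ERP Platform"}   # titles with active users or running instances

candidates = {title: fee for title, fee in maintenance_contracts.items()
              if title not in software_in_use}

for title, fee in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"Review maintenance on '{title}': potential annual savings ${fee:,}")
print(f"Total potential savings: ${sum(candidates.values()):,}")
```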
The IT steering committee should review all projects to determine which can be cancelled or delayed. Reviewing upcoming projects is easy if the right guiding principles are created and used. Only continue the projects that increase revenue, impact market share, or support compliance. All other projects should be delayed. In-flight projects should undergo the same scrutiny but may require more discussion.
Reviewing the IT organizational structure to identify cost savings is a difficult yet necessary exercise. Most IT departments have grown along with the company without significant thought for optimal organization structure. When times are flush, it’s easy to add personnel to plug gaps without determining the long-term impact on costs. Realigning the organization around a framework like COBIT 5 will help ensure the business is properly supported in the long term.
Increasing Efficiency
IT and business should work together to determine how to better leverage existing systems to improve process efficiency. A quick review of trouble tickets can help IT develop an inventory of complaints where workflow, configurations, and heavily modified code are hindering employees’ ability to perform day-to-day activities. A joint effort to change processes while refining system configurations can have a huge positive impact, sometimes leading to elimination of positions within the business.
Changing business conditions or processes could also lead to elimination of heavily modified code, reducing the amount of effort required to support the system. Specialized external resources may no longer be needed to monitor and continually modify the code. Internal support resources could be redirected to focus on other areas that can increase business efficiency.
Extending the existing systems with new modules or add-on capabilities may be counterintuitive in a down economy. However, this is the perfect time to take a hard look at how new functionality can support revenue-generating processes. IT should work with business to ensure processes are rationalized, efficient, and effective, then determine if additional functionality is required to support the new environment. A strong business case could lead to implementation of new CRM, field service, asset management, or other systems.
Improving IT Security
IT breaches, no matter how minor, can lead to a significant expense. The consultant fees and data loss liability can quickly add up whenever a system breach is detected. IT can work with the business to increase employees’ awareness of cybersafety. Implementing new processes and awareness programs amounts to an inexpensive insurance policy against the cost of a breach. Successful cybersecurity programs can also help reduce the need for expensive cybersecurity detection, penetration, and removal tools.
Migrating to the Cloud
Companies that have data center managed services contracts should take a hard look at migrating to the cloud. Many data center contracts were not written with clauses to reduce capacity if a company shrinks. As a result, companies are paying for more capacity and service than needed. Going through an exercise to determine the cost savings of moving to the cloud can encourage the managed services provider to reduce contract costs. If not, then moving to the cloud can often reduce the overall cost of running core systems. And moving to the cloud will help ensure IT’s ability to support growth in the future, efficiently and cost effectively.
It’s difficult to predict exactly what will happen in the economy this year. However, IT should get ahead of cost reduction activities and strive to be agile, efficient, and productive.
The Untapped Power of Journey Mapping
by Erika Clements
Have you ever tried to look at a 3D image without the accompanying 3D glasses? The image is somewhat blurry and doubled, heavily washed out with varying shades of blues and reds. You can make out the image, of course, but you aren’t getting the true picture. Put on the 3D glasses, and it’s a whole different story. Not only is the image clear, but you are seeing it in greater quality and dimensionality than ever before.
This metaphor easily translates to preparing for and carrying out system implementations. The information gathered through a typical procedure—reviewing documentation, processes, and brief employee interviews—gives only a blurry, discolored perspective.
So, you want the 3D glasses? In implementations, the glasses come in the form of journey mapping.
What Is Journey Mapping?
Journey mapping considers a user’s experience from the beginning to the end of a process. It should be done for any process that 1) interacts with a customer (internal or external) or 2) is performed frequently. While it’s often used to improve customer experience, journey mapping can also increase efficiency and user acceptance throughout an implementation. Though the employees (users) of the system are not a typical “customer,” they’re the individuals on the receiving end of the new system. The users’ buy-in or rejection of the solution can have very real consequences on the attainment of implementation goals.
Journey maps view process flows from a new angle. Rather than interviewing employees and simply noting the hard facts relating to the process, journey mapping actually takes into account an employee’s thoughts, feelings, pain points, and frustrations associated with the process being defined. It is important to understand that journey mapping does not slow the implementation process down at all. As noted previously, all the information is already being conveyed during interviews, workshops, and facilitated sessions. The data just hasn’t been meaningfully captured in the past. Journey mapping supplements the design phase with richer and deeper information, making change management even more powerful.
The majority of implementations consist of a company selecting a best fit system and making the necessary customizations to meet the company’s critical requirements. Breaking down the typical implementation process shows how journey maps can be woven into the plan to strengthen the overall implementation.
Selection
Selecting a system based on process flows alone paints an ideal but not necessarily realistic picture. Process flows often show how a vast array of functionality can be used, though a journey map would reveal that much of this functionality is non-essential. The resulting benefit is the ability to choose a less expensive solution with fewer features and potentially even greater efficiency. For example, a manufacturing and distribution company assumed providing competitive costs during the sales order process was critical to customers. Filling this requirement would have resulted in the acquisition of an expensive third-party product. Through the use of journey maps, the company discovered customers were indifferent, and the costly third-party product would have been a colossal waste of money. Journey maps enable companies to target what they need to buy and implement, inevitably leading to a reduction in overall cost.
Benefit in selection: While process flows show which features and functionalities could be used, a journey map shows that not all are needed.
Configuration and Testing
Throughout configuration, journey maps help focus efforts in potentially unexpected but high-return areas. For example, the standard AP invoice processing functionality in all ERP systems will result in paid bills. However, journey mapping the process often exposes the need for custom forms and screens to support heads-down data entry in high-volume environments. Without this, the bill-paying process would slow, making it difficult to obtain critical supplies in a timely manner.
Benefit in configuration: A process flow will show that “out-of-the-box” would work, but the journey map shows that it’s not as efficient and creates a huge bottleneck.
Journey maps are used throughout testing. From the very beginning, they are referenced when creating test scripts to ensure critical requirements are adequately tested. Process flows help test if the system accomplishes its end goal, and journey maps allow testing that confirms the end goal is reached with the least resistance, room for error, or employee frustration.
During testing, journey maps confirm that the processes and system configurations make sense by highlighting bottlenecks or areas where there’s a great amount of change. For example, if a new field ticketing system requires people to use structures, price lists, and technology they are unfamiliar with, journey maps will identify and highlight the new process as a high priority for testing. Ensuring all critical requirements are operating per user expectation eliminates resistance during training and go-live.
Benefit in testing: A process flow will show that the process is streamlined and doable; the journey map shows where it is not done easily and then serves as a roadmap to navigate the bumpy road ahead.
Training and Rollout
Journey maps help companies focus on the right training so training is not wasted. The groups that undergo the greatest change or difficulty are identified and prioritized during training and support. Leadership is empowered with an in-depth understanding of the steps of the process that have been simplified for users, as well as the steps that will be more tedious and potentially frustrating. With this knowledge, leadership can clearly explain the purpose and importance of more tedious steps to aid in change management. Journey maps help apply change management concepts in a realistic manner, enabling effective and lasting change management. Following rollout of the new system, journey maps aid leadership in prioritizing support and ongoing training, starting with the areas of highest criticality.
Benefit in rollout: A process flow shows the steps for training. The journey map shows the feelings, expectations, and stumbling blocks associated with these steps, equipping leadership to train with sensitivity to achieve lasting change management.
In every step of the implementation process, there is a place for journey maps to make the overall picture clearer, leading to stronger and more effective implementations.
Data Management Leading Practices
Turning Data Into Action: 3 Ways to Get More Value from Your Data
by Nicole Higle
How many times has a meeting concluded with action items to collect more data for analysis and reporting? The likely answer is too often. Companies get bogged down with data wish lists without first setting the foundation for accurate, basic reporting. Companies can get more value from existing data by cleaning up master data, conquering basic metrics, and implementing a reporting tool.
Clean up Master Data
Master data lays the groundwork for analysis and reporting. Companies with unreliable master data are impaired when it comes to using master-data-driven reports for trend analysis and decision making.
Think of the last time you moved—for how long did the previous owner’s mail continue to be delivered? Now imagine this same example with a manufacturing company who regularly sends large shipments across the globe but fails to maintain customer addresses. Inaccurate addresses mean skyrocketing shipping charges and a less desirable P&L.
Fixed asset master data is a pain point for capex-heavy organizations. Management is constantly requesting asset utilization and profitability reports, but manual manipulation and a substantial amount of estimation is required to account for unaligned and missing data. Failing to keep current logs of equipment activity makes it difficult to rely on usage metrics as a basis to plan for future capex purchasing.
Clean and reliable master data is the foundation for reporting and analysis. While cleanup efforts can be extensive, the result increases the accuracy of reporting tied to master data, providing a more accurate basis for future decision making.
Conquer Basic Metrics
Companies like to test the tried and true expression, “You can’t run before you can walk.” Spoiler alert—those who try, fail. Advanced reporting cannot be implemented without first mastering the basics.
Get rid of the list of nice-to-haves and focus on the financials required for shareholder disclosure and basic metrics that provide insight into the different functional areas of the company. Basics might include metrics such as days to bill, employee turnover, customer retention, etc. While these metrics sound obvious, when Finance and Operations at an oilfield services company are asked how to capture days to bill, their responses differ. Operations explains days to bill is the number of days to send a customer invoice from the time the field ticket is signed by the customer. Finance argues the count does not begin until the signed field ticket is scanned and uploaded to the AR inbox, which is on average a difference of 2-4 days.
Basic and reliable metrics set the foundation for more advanced reporting. Even more important is having organizational alignment and understanding of what these basic metrics mean. Developing a reporting strategy will ensure all organizational functions are on the same page about what is being measured.
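To show how far apart the two definitions can drift, here is a small sketch that computes days to bill both ways for a single, hypothetical invoice. The dates are made up; the point is that one invoice produces two different numbers until the organization agrees on a single definition.

```python
# Hypothetical example of the two competing "days to bill" definitions described above.
from datetime import date

field_ticket_signed = date(2024, 3, 1)    # customer signs the field ticket (Operations' start point)
ticket_uploaded_to_ar = date(2024, 3, 4)  # ticket scanned and uploaded to the AR inbox (Finance's start point)
invoice_sent = date(2024, 3, 10)          # customer invoice goes out

operations_days_to_bill = (invoice_sent - field_ticket_signed).days    # 9 days
finance_days_to_bill = (invoice_sent - ticket_uploaded_to_ar).days     # 6 days

print(f"Operations' definition: {operations_days_to_bill} days")
print(f"Finance's definition:   {finance_days_to_bill} days")
print(f"Gap between the two:    {operations_days_to_bill - finance_days_to_bill} days")
```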
Invest in a Reporting Tool
Once organizations become skilled at presenting basic, manual reporting packages, a reporting tool can help streamline data management for more advanced reporting.
When a company manages master data in disparate systems, it becomes difficult to merge data sets and ensure the information presented is accurate and available in real time. Basic reporting tools can help eliminate siloed data and foster a cross-functional environment for reporting. In addition to reducing the time spent manually compiling information from multiple data sources and automating report distribution, reporting tools aid in identifying business trends over time.
Clean and simple data provides the best results for organizational decision making. Stop adding to the data capture wish list and focus on cleaning up master data and getting basic metrics in place first. Then consider investing in a BI reporting tool for more challenging reporting.
Master Data Management: Who, What, Why, and How
by Nicole Higle
It’s day three of driving a brand-new, shiny SUV around town when the letter carrier delivers an unexpected letter from an unfamiliar tire manufacturer. The letter explains the tires on which the oh-so-pretty SUV sits have been observed to unexpectedly explode when traveling at high speeds. After remembering going 85mph down the highway in a rush to get to work yesterday, the following question arises: How do tire manufacturers determine who is driving on their tires? The answer: Master Data Management (MDM).
What Is MDM?
MDM is a process-based model used by companies to consolidate and distribute important information, or master data. The idea is to have an accurate version of master data available for the entire organization to reference.
Master data is the agreed-upon core data set of a business. As opposed to reference or transactional data, which could be something as mundane as the number of invoices completed in a day, master data refers to data directly linked to the meat of a business.
Master data varies depending on the organization and industry, but typically includes detailed information about vendors, customers, products, and accounts. Master data is critical, because conducting business transactions without it is near impossible. Without first establishing product codes for a particular model of tire, the manufacturer would not be able to track which tires are sold to which customers.
Why Is MDM Important?
In the example above, the only way the tire manufacturer would have the correct customer information for the owner of the new SUV is if the dealership gave it to them. And you can already see why maintaining a database of all their customers is important. Sure, they might inundate their customers with flyers and ads in the mail, but wouldn’t you appreciate the notification about potential tire explosions?
Managing master data is important, because business decisions are based on the story the company’s data is telling. Even the simplest of errors in master data will trickle down, causing magnified errors in other applications utilizing the flawed information.
Companies with nonexistent or underdeveloped MDM processes often encounter finger-pointing and displaced blame as a result of data discrepancies. Data discrepancies can be seen when month-end sales reports are delivered with conflicting data in the accounting systems and manufacturing systems. Discrepancies make it difficult to determine which system, if any, has the most accurate information.
Who Is Responsible for MDM?
MDM is often mistaken for data quality projects or technology systems owned by IT. Although IT may be involved in the distribution of master data, there is not one sole owner of MDM. To be successful in maintaining the integrity of critical company data, there must be a company-wide commitment to the ongoing maintenance of master data.
It is important to clearly define ownership of the components of MDM including: establishing data governance (standards around how data is used), creating an MDM strategy, and developing procedures for maintaining and distributing information to the people who need it.
A successful MDM program should include holding people accountable for maintaining master data and streamlining the sharing of critical data between departments.
How Do I Create an MDM Organization?
Improper maintenance of master data causes reporting inaccuracies and can lead to poor business decisions. The steps below should be followed to establish an MDM organization (a brief illustrative sketch follows the list):
- Define which data is master data—products, customers, vendors, etc.
- Determine primary data sources and consumers—the CRM system is the source of the customer master list, which is maintained by the credit department and used by the sales team
- Designate ownership of each master data set—the AP clerk is responsible for entering and updating vendor information
- Develop data governance processes—all new product information requires review/approval from the management team prior to product entry in the accounting and manufacturing systems
- Design necessary tools and workflows—technology can be implemented to help automate approvals and the flow of information
- Deploy processes for maintaining master data—businesses can create templates and enforce procedures to capture requested master data updates
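One simple way to capture these decisions is a shared registry that records, for each master data set, its source system, owner, consumers, and governance rule. The sketch below is illustrative only; the departments, systems, and rules are hypothetical examples, not a prescribed model.

```python
# Illustrative master data registry capturing ownership and governance decisions.
# All departments, systems, and rules are hypothetical examples.
master_data_registry = {
    "customer": {
        "source_system": "CRM",
        "owner": "Credit department",
        "consumers": ["Sales", "Accounting"],
        "governance": "Credit reviews and approves new customers before activation",
    },
    "vendor": {
        "source_system": "ERP",
        "owner": "AP clerk",
        "consumers": ["Procurement", "Accounting"],
        "governance": "AP enters and updates vendor information from approved request templates",
    },
    "product": {
        "source_system": "Accounting and manufacturing systems",
        "owner": "Product management",
        "consumers": ["Manufacturing", "Accounting", "Sales"],
        "governance": "Management team approves new products before entry in downstream systems",
    },
}

def describe(data_set: str) -> str:
    entry = master_data_registry[data_set]
    return (f"{data_set}: sourced from {entry['source_system']}, owned by {entry['owner']}, "
            f"used by {', '.join(entry['consumers'])}. Rule: {entry['governance']}.")

for name in master_data_registry:
    print(describe(name))
```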
Organizations large and small are faced with data challenges. Occasional focus on cleaning up critical data is not enough. Creating a comprehensive MDM strategy is the starting point to having confidence in company data. Avoid the pitfalls of poorly maintained master data by establishing processes to manage the creation of new master data and enforcing everyday practices to maintain the data over time.
The Importance of Clean Master Data Before ERP Go-live
by Mary Critelli
Companies spend millions of dollars to purchase and implement new ERP systems in the name of process improvement and efficiency. Yet many companies do not put the necessary time, effort, and money into cleaning up master data before going live with a new system. Master data is a term used for data objects that are agreed upon and shared across the company (i.e. customers, suppliers, products, and services). Clean master data describes data that is accurate and properly structured within a system.
Going live with unclean master data undermines the ERP implementation in the following three ways:
1. Data input standards only get harder to enforce after go-live
Implementing a new system is an opportunity to start with a clean slate from a data standpoint. Once a new system is live, the difficulty of going into the system and cleaning or fixing master data increases significantly, while the probability of going through this exercise decreases significantly. When purchasing a new car, most people would not take all of the trash out of their old console, backseat or trunk and throw it into the new car. The same logic applies to a new system. It does not make sense to bring in duplicate, inaccurate, or unnecessary data. Take time to go through existing data and make sure it is accurate, mutually exclusive, and collectively exhaustive.
2. Unclean master data prevents users from navigating the system as it was intended
A huge benefit of ERP systems is the way data and transactions are linked. These relationships make navigating the system and finding documents easier. The links also reduce the time and uncertainty associated with searching for documents and analyzing transactions. When master data is not controlled and accurate, the links break. For example, a client’s system contained duplicate vendor names—some written in all caps, some with spaces, some with no spaces—because processes and standards around master data maintenance were lacking. On more than one occasion, this client paid a vendor twice for one AP invoice, once to one vendor and once to a duplicate of that vendor. Imagine the cash flow nightmares companies have to deal with for something that can be fixed so easily.
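A large share of this kind of duplication can be caught before go-live with a simple normalization pass over the vendor master. The sketch below is a minimal illustration using made-up vendor names: it strips case, punctuation, and spacing differences and groups the records that collapse to the same normalized name.

```python
# Illustrative duplicate-vendor check: normalize names (case, punctuation, whitespace)
# and group records that collapse to the same normalized form. Vendor names are made up.
import re
from collections import defaultdict

vendors = ["ACME SUPPLY CO.", "Acme Supply Co", "Acme  Supply Co.", "Lone Star Tools", "LONESTAR TOOLS"]

def normalize(name: str) -> str:
    return re.sub(r"[^a-z0-9]+", "", name.lower())  # lowercase, drop punctuation and whitespace

groups = defaultdict(list)
for vendor in vendors:
    groups[normalize(vendor)].append(vendor)

for names in groups.values():
    if len(names) > 1:
        print("Possible duplicates:", names)
```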
3. Accurate and timely reporting may not be readily available to management
The most frequent complaint about legacy systems is that management cannot trust the output. Most reports from legacy systems are Excel-based and undergo a lot of manual manipulation, leaving room for keystroke errors. The purpose of implementing an ERP system is to get operations and accounting data in one integrated system so information can be pulled in real time for reporting. The reporting tools in today’s ERP systems are extremely powerful and eliminate the need for manual manipulation. However, the quality of reports is only as good as the quality of data.
The more work done on the front end to organize and cleanse master data, the more functional and accurate the reporting is. Trenegy starts every implementation with a data model and reporting strategy. By creating a blueprint of the reports a company expects from the ERP, software developers can build fields that will capture the right data from the start.
Master Data Management: The Good, the Bad, and the Ugly
by Nicole Higle
When Master Data Management is brought up as an action item on the company to-do list, it’s often met with a room full of heavy sighs and groans. Management teams understand that not having an MDM process has negative effects, including severe delays in daily processes, lack of confidence in master files, and uncertainty in providing regulatory reporting. However, nobody jumps at the chance to put one in place.
In an era where mergers and acquisitions are on the rise, organizations must put in the time to get MDM in check to minimize the impacts of unreliable master data.
The Good: You Want Reliable Data? You Got It.
Undoubtedly, an MDM process makes life easier. MDM encompasses the procedures and technology put in place to maintain and protect the integrity of critical data sets. A well-defined process helps facilitate the distribution of shared attribute data between source systems, providing one version of the truth. Quality control checks embedded in data management practices instill confidence in data used by the business for day-to-day operations. They ultimately minimize the confusion that comes with data inconsistencies.
Proactively managing master data also has a positive, direct impact on company reporting. Having access to reliable and timely data reduces hours spent researching data idiosyncrasies in order to provide accurate asset counts and regulatory information.
The Bad: Managing Master Data Is Not Free or Easy
While there’s no question regarding the benefits of MDM controls, investing in and developing a process is not free. To realize results, businesses must create a well-defined process for the entire lifecycle of the data sets being used. Additionally, they must budget for a dedicated team of resources to manage the flow of data. This team should be carefully selected, as it is critical that team members are familiar with all data elements.
After solidifying processes and selecting an MDM team, master data files must be analyzed and cleansed. This exercise can be painful and time consuming depending on the state of the data, but it’s necessary to see positive results. Custom built reports can be created to quickly identify data discrepancies for cleanup.
It is important to note that even after all master files are scrubbed, data gaps will be an ongoing issue. Sometimes specific data points, such as legacy effective dates and pre-acquisition contact information, are not available or do not exist. Implement an MDM organization now to avoid creating unnecessary data gaps in the future.
The Ugly: Not Managing Master Data Is Worse Than the Difficulties of Managing It
For organizations operating without a formalized way to manage company master data files, getting information is ugly. Think of unorganized vendor files causing payments to be sent to incorrect addresses, and contradictory product codes living in multiple systems. Without a clean foundation of master data and a process to manage updates, day-to-day business operations become sluggish. The lack of confidence in data often results in users working in offline, self-managed workbooks.
Too many organizations struggle to conduct standard business processes due to the lack of access to reliable master files. This issue is only magnified as acquisitions are made and customer bases grow.
Regardless of industry, all organizations are susceptible to the nightmare of disorganized master data. If businesses identify lack of critical data as a pain point now, it will only become more problematic as data grows. Stop perpetuating the cycle of bad data and implement an MDM program comprising a robust process, clean master data, and a team that knows what they’re doing.
3 Ways to Prepare for Master Data Management
by Wesley Cooper
Master data management can mean a variety of things depending on who you talk to. The root of master data management is ensuring data is entered correctly the first time and shared consistently across the organization. For example, the land department for an E&P company enters the land records one time, and the records are pushed out across all other systems, ensuring accuracy. Many companies use systems to help manage master data. However, before taking on the large expense associated with purchasing a system, a company should take three simple steps to prepare for master data management.
1. Define the data governance model
The governance model establishes rules around identifying what a company’s master data is, as well as who owns, uses, maintains, and holds responsibility for accuracy of data. Robust data governance allows for clear lines of communication and accountability across all key data assets.
For upstream oil and gas companies, data related to the well life cycle process comes from a variety of departments and is often changed throughout the process. Without strong data governance, the well master data can quickly become duplicated or inconsistent, resulting in unreliable data.
With data governance established, companies can begin examining the data to ensure it is accurate, clean, and able to support what is required of it.
2. Clean up the master data
Companies capture and produce enormous amounts of data. Data is often captured in different formats and some data may be unnecessary. It is important to identify resources who are knowledgeable across key data assets to comb through and determine what data is necessary. Defining critical data points enables the company to eliminate non-value adding data. Master data cleaning pays dividends in implementing master data management, future system implementations, and instilling confidence in users of key data.
Trenegy has worked with a wide range of clients with varying levels of maturity in master data management. Clients who prioritized data cleanup achieved higher confidence in analytics-based decision making and reduced the preparation time for system implementations.
3. Conduct performance process pilots
By conducting performance process pilots, a company takes on a small-scale effort to manually test data collection and consumer delivery at each stage of the data lifecycle. Appropriately testing the process associated with initiating, maintaining, and retiring master data helps ensure the data will be compatible with future tools to help automate master data collection and delivery.
This is where businesses often put the cart before the horse, selecting a tool and resources before completing necessary process preparations. Selecting a tool too early burdens companies with immense rework due to compatibility issues between the master data, collection methods, and automation tool.
Master data management is necessary in today’s environment of exponentially growing data and complex system architecture. However, the effort to achieve master data management does not have to be as arduous as it seems. By defining data governance models, cleaning master data, and performing pilots, companies gain a solid foundation for effective master data management. The combination of these three components provides reliable data and supports streamlined processes with far fewer variables for future technology projects and system upgrades.
Well Lifecycle Administration: Like Managing an All-you-can-eat Buffet
by Gracilynn Miller
Managing the administration of a well’s lifecycle from inception to abandonment is a complex process—not unlike managing a Chinese food buffet. Not all patrons select the same items in the same order, and eager diners jump around the buffet line for seconds and thirds. Keeping the buffet stocked with fresh food is a challenge. Likewise, the combination of landowners, multiple parties in a joint venture, take-in-kind owners, changes in property ownership, and other complexities make administering a well’s lifecycle difficult. Add third-party production, revenue and billing data, consent and non-consent partners, and billing out contracted field services, and the situation becomes daunting. It’s a bit like managing an all-you-can-eat buffet.
Many of the well lifecycle administration challenges are external to the organization and cannot be changed. However, the internal challenges are significant and can be addressed to achieve efficiency. Each activity in well lifecycle administration is owned by a different department in the organization, and each department assigns multiple attributes to the wells in different systems. The attributes provide information about the well such as the well ID, well name, location, production status, spud date, completion date, and acquisition date. The handoff between departments is often unclear, and the definitions for the same well attributes become inconsistent across systems. The inconsistencies negatively impact cost tracking, production tracking, and revenue allocations.
Leading exploration and production companies respond to the complexities within well lifecycle administration processes by addressing three key areas:
1. Defining accountability
Unclear accountability is the root of much frustration in the well lifecycle administration process. Leadership should collectively agree on each function’s accountabilities and responsibilities within the process. A RACI matrix (Responsible, Accountable, Consulted, and Informed) is a very effective tool to communicate roles and responsibilities and to define activity ownership. One of the first areas our clients address is aligning the planning function in the organization. Aligning development, lease operating expense (LOE), production, and general and administrative (G&A) planning allows an organization to streamline data flow and reduce the planning cycle time.
2. Developing clear processes
Organizational accountabilities should be supported with clearly defined business processes. Standard processes improve a company’s ability to address issues arising from external parties and ensure consistent well lifecycle administration. Process flows for data entry and approval should be documented, clarified, and streamlined for every step of well lifecycle administration. Moreover, compliance monitoring is critical to ensure employees follow the policies and procedures. A high-impact area where our clients begin is clearly defining the process of managing and communicating the creation of and changes to the revenue decks between land and revenue accounting.
3. Integrating tools
Exploration and production companies implement a variety of specialized tools to support the business, typically resulting in a best-of-breed environment. Revenue accounting and joint interest billing may be on a common accounting platform, yet marketing, production, land, and drilling functions may be supported by independent, function-specific tools. Our clients have built technical data management and workflow capabilities to bridge the systems and automate the synchronization of the business processes through a centralized hub. Key information required to support efficient and effective well lifecycle administration is gathered and distributed in a timely manner through a set of integration tools. The integration tools require the alignment of department accountabilities and processes before tool implementation.
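As a rough illustration of the hub concept, the sketch below shows one canonical well record being distributed to downstream functions, each of which subscribes only to the attributes it needs. The system names, attributes, and publish mechanism are hypothetical placeholders, not a specific product's integration API.

```python
# Illustrative sketch of a centralized hub distributing one canonical well record
# to downstream systems so attribute definitions stay consistent.
canonical_well = {
    "well_id": "42-123-45678",
    "well_name": "Example Unit 1H",
    "location": "Example County, TX",
    "production_status": "Producing",
    "spud_date": "2023-06-01",
    "completion_date": "2023-08-15",
}

# Each downstream function subscribes to the subset of attributes it needs.
subscriptions = {
    "land": ["well_id", "well_name", "location"],
    "production": ["well_id", "well_name", "production_status", "completion_date"],
    "accounting": ["well_id", "well_name", "spud_date", "completion_date"],
}

def publish(well: dict, subs: dict) -> dict:
    """Return the attribute payload each system would receive from the hub."""
    return {system: {attr: well[attr] for attr in attrs} for system, attrs in subs.items()}

for system, payload in publish(canonical_well, subscriptions).items():
    print(system, "<-", payload)
```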
The optimal solution for streamlining the well lifecycle depends on the size, organization structure, and growth strategy of the exploration and production company. While there’s no one-size-fits-all solution to the well lifecycle administration challenge, Trenegy has successfully worked with various exploration and production companies to identify and implement fit-for-purpose well lifecycle process improvements.
Making Sense of Big Data
by Peter Purcell
Major League Baseball implemented “Statcast” in 2014 to provide each team with seven terabytes of data recorded by radar and cameras. To put a terabyte into perspective, an average Excel spreadsheet can manage four gigabytes of data. That means at least 1,750 Excel workbooks would be needed per game to house all of the recorded data. Multiply that by the ungodly length of a season—162 games—and you arrive at more than 280,000 Excel workbooks for each team.
The scientific name for this problem is “too much data,” and without a solution, the plot of Moneyball 2 would be Brad Pitt sitting in a dimly lit room yelling four-letter words at his laptop for the entire 90-minute movie. Asking the right questions with powerful algorithms transforms too much data into big data. Baseball teams, like many industries, have adopted this approach in the never-ending pursuit of more accurate decision making.
Do not let the consultant-ese gobbledygook that surrounds this concept confuse you. Big data is just data. Anyone who has broken an Excel sheet dabbles in big data, because big data describes data sets that are so massive that our current forms of processing (e.g. Excel) are incapable of making sense of them. To properly define the term further, we should assess how it is produced, how it should be used, how it should not be used, and its impact on our lives.
We are all data producers. Our smartphones track our location and provide app makers with second-by-second information. Online marketplaces record all of our clicks, and even the clicks we do not make. Surprisingly, the largest source of data in the world is the data that makes up who we are. Well-known scientist and data expert Riccardo Sabatini refers to pregnant women as the first “3D printers … assembling the biggest amount of information that you will ever encounter.” Each person’s genome fills 262,000 pages of text.
Now that’s big data.
Making Sense of Big Data
Actually doing something with big data is a completely different challenge. IBM’s Watson uses machine learning and AI to try to make sense of all of this information. Other data processing technologies are sprouting up, claiming they can improve business performance by leaps and bounds. Nevertheless, the critical ingredient in the big data stew is the human touch.
Google’s Director of Research, Peter Norvig, famously said, “We do not have better algorithms. We just have more data.” A data-oriented company is no longer run by the highest-paid person’s opinions, but rather that person’s questions. Without the right questions and analysis, big data is a pile of useless garbage at best, and at worst, it’s harmful. Just as statistics can be manipulated to support conflicting viewpoints, big data can result in spurious correlations. For companies assessing the data that’s valuable to them, it’s important to set standards for what that data represents. This eliminates confusion, useless analytics, and false inferences.
Another major problem is the revenge of the nerds. Teams of data scientists are attempting to solve questions that have non-scientific answers. For example, scientific precision cannot be used to make judgment calls, like ranking the most important or best something of all time, and it struggles when assessing cultural decisions like hiring and building teams. Big data needs big judgment to work.
Big Data in Real Life
The real life applications of big data are intriguing and slightly Orwellian. Macy’s can project their Black Friday revenue based on how many mobile phones are in their parking lot. Amazon has patented “anticipatory shipping” which ships an item before a member knows they want it, based on an algorithm. Predictive policing uses analytics to send law enforcement to locations before crime happens.
In the business world, big data enables companies to better assess risk and develop products or services based on consumer preference. Retail brands analyze and then predict customer preferences. Manufacturing companies read sensors on machinery and apply production schedules to anticipate equipment maintenance and replacement. Franchises determine locations for storefronts based on data concerning demographics, traffic analysis, and consumer behaviors. Exploration companies, before they drill, gather and assess millions of records regarding both the presence of oil and gas and its extractability.
However, the sheer amount of data proves daunting for many corporations. What data is actually helpful to the bottom line? Once that’s determined, how can it be used?
With the improvement in data storage capabilities and processing technology, the term big data may soon disappear, and information will become “data” once again. Data science has always required, and always will require, the right question and human analysis to be useful.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
Reporting Strategy: More Than a Slimmed-down Report Stack
by Brenna O’Hara
Our research indicates management in large organizations can spend up to 50% of their time developing, modifying, and reviewing reports. For a company with more than $1B in revenue, the quantity of reports can stack higher than the Empire State Building. In many cases, the general perception is that more information is better. However, too many reports can have an adverse effect on overall efficiency.
The problem of excess reports is more common in publicly traded companies with complex structures, multiple lines of business, and global operations. Organizations should seek to eliminate unnecessary reports and focus on enhancing the value of remaining reports by following three simple rules:
1. Standardize data definitions. The root cause of inconsistent data definitions begins at the bottom of an organization. Business functions tend to act in silos when defining metrics and reports for analyzing the business. Employees are more concerned with supporting their own responsibilities instead of seeking to understand the other business functions. For example, operations classifies a hose and a coupling as two distinct products categorized in two separate product lines. At the same time, the commercial team classifies the combined product, a hydraulic hose, in yet a third product line. Inconsistent product line definitions force the operations and commercial management teams to create two reports to account for the difference in production and sales numbers. A company must develop a cross-functional team to create standard data definitions and a global data model with consistent dimensions for analyzing the business, such as geography, division, customer, or product line.
2. Align shared processes. When more than one business function is involved in the same process, such as sales and operations planning, common information is not always leveraged. For example, base numbers for revenue and capacity planning might be calculated differently. The sales team develops a revenue forecast to help the organization understand growth opportunities and the demand planners engage sales and marketing to develop a product demand plan to provide capacity requirements. Additional reports are developed by finance to bridge the forecast gaps. Two separate processes are driving the need to create multiple reports that likely only reconcile with heavy manual manipulation. Alignment should begin at the process level with defining a standard, integrated process for shared information, communicating the changes, and implementing policies and controls to ensure the new process is followed.
3. Challenge reports. Often, a one-time, ad-hoc request for information becomes an institutionalized report and is added to a formal reporting process. Employees fall victim to the I’ve-always-done-it-this-way syndrome and mindlessly create the same report over and over without questioning the value. Time and resources that could be focused on value-added work are often wasted. For example, one Treasury organization created a daily cash position report distributed to more than 100 managers. The value of the report was questionable. The Treasury manager decided to test the value by not sending the report for a week. Nobody complained of a missing report. If the purpose of a report cannot be explained and the report is not useful for business decisions, stop creating it.
How Your Reporting Defies Statistics and How to Fix It
by Erika Clements
In a 2014 study, researchers in psychology and statistics found that people felt more confident in decision making with fewer options. Conversely, decisions were more difficult when faced with a large number of options. The study proved the less-is-more concept works. This concept also applies to metrics. Imagine complicating the decision-making process with an abundance of metrics. It is important to develop key reporting capabilities that provide a window into key business drivers without overloading management with unnecessary reports and metrics. Management reports are essential to company success but can become a burden to everyone if the data is not captured at the appropriate granularity.
How does a company successfully achieve the balance between quality reports and quantity of information?
Less Is More
One of the key elements of building effective management reports is boiling down a large amount of information to what is critically important:
1. Define the company goals and growth plan. A company focusing on a couple of lines of business and striving to grow organically will require much more granular data on each line of business. However, a company planning to grow through acquisition, taking on new lines of business with each new acquisition, must prioritize flexibility and scalability in its reporting.
2. Determine the biggest drivers of cost and revenue. From among the biggest drivers, it is also pertinent to consider which are most variable. Understanding slight variances that have significant cost effects enables companies to make strategic decisions about when and where to offer certain services, driving costs down and increasing margins. Once the key drivers have been identified, get rid of unnecessary data and reports by answering the following questions:
- Will the information materially impact our results?
- Will the information change over time or remain constant?
- Is the information relevant to our stakeholders? Does it provide insight?
- Does the information provide predictability into future indicators of success?
Quality In, Quality Out
Regardless of the business systems being utilized, if quality information is not being entered into the system, the resulting management reports will yield false or misleading information for decision making. The following steps can help ensure quality data:
1. Ensure business processes align with updated reporting requirements. For example, if a company wants to begin tracking the profitability of certain assets, field operators would be required to create asset numbers, tag assets with the appropriate numbers, and capture when the specific asset is used, repaired, or relocated. If new reporting metrics are identified without rolling out corresponding updates to business processes, inaccurate reporting will continue to plague the company. There are not enough systems, precautions, or automations to yield desired information without effective business processes, enforcement from management, and participation from line-level employees.
2. Standardize input fields. Free-form or open input options often result in inconsistent data entry. For instance, one employee enters Houston, another enters HOU, and another enters HTX for a billing location. The information should all be associated with Houston, yet when filtering, HOU and HTX will likely be left out, resulting in reporting inaccuracies. A way to mitigate this common problem is to use drop-down menus with preselected values rather than free-form boxes (a brief sketch of this rule and the next follows this list).
3. Utilize cross validation rules. Cross-validation filters data to prevent coding errors. For example, if wireline services are only offered in Texas, an employee who inputs their location as Colorado should not see the option to select wireline as a service.
4. Allocate with discretion. Creating allocations for internal purposes provides little to no benefit. For example, an IT organization may allocate its IT costs to the operating divisions. At the same time, the operating divisions have no control over the allocated IT costs. This additional information becomes meaningless to the operating divisions, and the IT costs become less visible to the organization as a whole. Allocations should be used where there is a legal or customer requirement to allocate the costs.
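A minimal sketch of rules 2 and 3 above, assuming hypothetical locations, services, and field names:

```python
# Hypothetical example combining standardized input fields (rule 2) and a
# cross-validation rule (rule 3).
BILLING_LOCATIONS = ["Houston", "Dallas", "Denver"]           # drop-down values, not free-form text
SERVICES_BY_STATE = {"Texas": ["wireline", "coiled tubing"],  # services offered per state
                     "Colorado": ["coiled tubing"]}

def validate_entry(location, state, service):
    """Reject entries that free-form input or missing cross-validation would let through."""
    if location not in BILLING_LOCATIONS:
        return f"'{location}' is not a valid billing location"
    if service not in SERVICES_BY_STATE.get(state, []):
        return f"'{service}' is not offered in {state}"
    return "OK"

print(validate_entry("HOU", "Texas", "wireline"))       # rejected: use 'Houston'
print(validate_entry("Denver", "Colorado", "wireline")) # rejected: wireline not offered in Colorado
print(validate_entry("Houston", "Texas", "wireline"))   # OK
```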
Management reports should present information managers need to make informed decisions. Abiding by these two simple and statistically-backed principles will give management exactly what they need—and nothing more.
Our Favorite Business Intelligence Tool
by Wesley Cooper
Among a myriad of tools, we have found Power BI from Microsoft to best combine data into useful information. We have seen many clients achieve success with this easy-to-use, cost effective, and flexible tool.
Ease of Use
Easy-to-use tools are more effective due to quick adoption. We have seen clients with no prior IT experience learn Power BI within a few days.
Power BI provides an Excel-like interface and functionality, allowing users to work with what is familiar. Drag-and-drop functionality allows users to build reports easily. Best of all, extracting information from commercially available ERP systems is no longer a multi-year IT project. Microsoft has created standard interfaces so data can be extracted from a variety of sources.
Cost
Traditional BI solutions are costly to implement and support. External consultants are required to configure the software and end users constantly need training. Power BI offers a free version with extensive online training to eliminate the need for a consultant. A license is only required if users share dashboards or distribute reports.
Flexibility
Reporting tools typically cannot be easily modified as business needs change. Power BI allows users to quickly change a data model anytime the data structure changes. Reports can be easily manipulated in a matter of minutes to match business needs. The speed and frequency at which data can be loaded is up to the user, making reports more timely and accurate.
Trenegy does not have any vested interest in Microsoft. Our recommendation simply stems from our assessment and our experience with clients that have successfully implemented Power BI in a self-taught environment. As we aim to eliminate inefficiencies in companies, we believe Power BI is an excellent option to streamline reporting and drive results.
Actionable Analytics
by William Aimone
The alarm goes off and the TV is flicked on to the news channel.
- Traffic report: A massive wreck on the highway on the normal route, causing a forty-five-minute delay—probably a good idea to take a different way.
- Weather report: Heavy afternoon thunderstorms—might need to pack an umbrella and rain jacket.
- Down to the last bit of coffee creamer—looks like a trip to the grocery store on the way home.
Wow, not even an hour into the day, and already at least three actionable decisions have been made.
Recently, some consultants have coined the term “actionable analytics” as a new buzzword. Actionable analytics is the concept that a person or company can analyze timely and relevant data about specific needs, and from that analysis develop a series of strategic actions.
The three morning decisions above were made based on analyzing data and taking action. The weather reporter analyzed data about atmospheric trends and provided information to take action—pack an umbrella. Super fancy term for everyday judgment calls, right? Same thing goes for companies. “Actionable analytics” is marketed to businesses to help consultants and software vendors sound smart.
Really, the term is stating the obvious.
Do all companies want actionable analytics? Of course. At the risk of insulting whoever coined this term, why on earth would anyone want to analyze information they cannot take action on? Shouldn’t all analytics be actionable? It should go without saying that a successful company would not waste time and resources on analyses unless they are useful and help drive strategic actions.
When confronted with the use of the term, it helps to know who is using it. Typically, “actionable analytics” will be introduced by one of three personas: a salesperson, an IT specialist, or an analyst.
Actionable analytics should not be used to categorize reports as valuable, but rather all reports from a company’s system should be valuable. When data outputs from a system are no longer useful, companies should look to improve upon their existing ERP.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
The Future of Business Intelligence
by Todd Boutte
Business intelligence (BI) is a simple concept. It involves 1) collecting data pertaining to your company from internal and external sources and 2) finding a way to distill it into something actionable. Essentially it involves harvesting the data you need to make good business decisions.
Today, the term “business intelligence” usually refers to the software or tools organizations use to turn data into usable information. It’s come a long way in the last 10 years, and with the recent growth of artificial intelligence, BI tools have powerful potential.
How Business Intelligence Has Changed in the Last 10 Years
In the last decade, companies have consolidated their business systems around a few key players (Microsoft, Oracle, SAP). Microsoft stands out because they’ve built a product into an ecosystem that thousands of organizations use every day. Microsoft developed Power BI as a stand-alone product within the last 10 years.
More than 10 years ago, tools were difficult to use. They required people with specialized skill sets to write code and gather data to turn it into usable information. Microsoft, however, as a leading data and office productivity company, has made key contributions to simplifying business intelligence. With Microsoft’s Power BI, companies no longer need an army of database administrators and developers to handle data. It has become more of an intuitive, self-service business intelligence platform people can use themselves.
The Future of Business Intelligence Is Artificial Intelligence
Since Microsoft is a key player in this industry, they’ve already included some basic AI tools within Power BI. One of those is a Q&A box that can be included in a report to make it easier to find information. A user can ask, “What was the revenue in Q1 of 2022 vs. 2023?” Power BI will do its best to pull that information.
As Microsoft continues to develop AI capabilities, we expect users will be able to ask even more complex questions and follow-up questions, just like with ChatGPT.
We also expect AI to be able to examine a set of data and make inferences based on that data. For example, suppose a company needs to review customer feedback on a product line but has 10,000 customer reviews. That’s a lot for one person to parse through. Instead, there’s potential for AI to step in and find common threads in the language without hundreds of human hours spent combing through reviews. Instead of merely taking a 5-star review at face value, AI could analyze what was said about the product, since a 5-star review isn’t always meaningful if the text says otherwise.
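As a rough illustration of the idea, and not a description of any particular Power BI or AI feature, a first pass at finding common threads can be as simple as counting recurring words across reviews (the reviews below are hypothetical):

```python
from collections import Counter
import re

# Hypothetical reviews; in practice these would come from a customer feedback export.
reviews = [
    {"stars": 5, "text": "Great pump, but the seal started leaking after a week."},
    {"stars": 4, "text": "Seal leaking again. Support was helpful though."},
    {"stars": 5, "text": "Works fine. Leaking seal replaced under warranty."},
]

STOPWORDS = {"the", "a", "but", "was", "after", "again", "though", "under"}

words = Counter()
for review in reviews:
    for word in re.findall(r"[a-z']+", review["text"].lower()):
        if word not in STOPWORDS:
            words[word] += 1

# The most common terms surface a theme a 5-star average would hide.
print(words.most_common(3))  # [('seal', 3), ('leaking', 3), ('great', 1)]
```

A real AI model would go well beyond word counts, but the point stands: the text can contradict the star rating, and machines can surface that contradiction quickly.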
It’s important to note that, no matter how advanced AI is, organizations shouldn’t fully rely on AI to extract data and make inferences. Human intelligence will still be required to manage AI tools and make sure they’re pulling the right data, making accurate inferences, and interpreting language correctly.
AI is all about saving time and allowing employees to add more value to the organization. As AI advances, it will alleviate a lot of time-consuming activity and allow employees to focus on the strategies and conversations that will drive business decisions.
The Right Mindset for AI
Remember, AI is a tool. It’s not a decision maker or a business strategist. While it can replace a lot of human tasks, it doesn’t replace a human. The people using AI tools are the key to making AI tools successful. The right tool in the wrong hands won’t solve anything. But if used correctly, AI has the potential to add significant value to organizations.
A Word on Best Practices
When it comes to managing data, the following practices are crucial, with or without AI.
1. Establish Good Governance Around Data
Setting standards around creating data is key. We’ve seen companies that have multiple people entering the same data in their system under different names (e.g. GE, GE Power, General Electric). When someone asks to see information for GE, the data isn’t accurate. Organizations must have good governance and data ownership on the front end so information is centralized.
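A minimal sketch of the idea, assuming a hypothetical alias table owned by whoever governs the master data:

```python
# Hypothetical alias table maintained by the data owner: every known variant
# points to one canonical customer record.
CUSTOMER_ALIASES = {
    "ge": "General Electric",
    "ge power": "General Electric",
    "general electric": "General Electric",
}

def canonical_customer(name):
    """Resolve a free-typed customer name to its governed master record."""
    return CUSTOMER_ALIASES.get(name.strip().lower(), name.strip())

for entered in ["GE", "GE Power", "General Electric"]:
    print(entered, "->", canonical_customer(entered))
# All three variants now roll up to the same customer in reporting.
```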
2. Define & Agree on Metrics
It’s important to agree on and define which metrics are to be tracked. Know what’s included in each metric and what’s not. If people realize data metrics aren’t consistent or correct, they won’t believe the data. They’ll be more likely to create their own databases that are more consistent with the data they need.
Terminology is critical. In many organizations, different departments or divisions within the company have different definitions for the same word. But there shouldn’t be any ambiguity on a well-built report. It’s important to note that BI software or tools can’t solve this problem. It’s about the processes around business intelligence and the people involved. BI software is maintained by people, and the processes for maintaining it must be clear and thorough.
At Trenegy, we help organizations implement fit-for-purpose business intelligence solutions to drive value. For more information, reach out to us at info@trenegy.com.
IT Communication
How to Revitalize the IT Steering Committee
by Peter Purcell
Benjamin Franklin is quoted as saying, “Guests, like fish, begin to smell after three days.” It typically takes less than three meetings for the relationship between business and IT to suffer the same fate within an IT Steering Committee (ITSC). The euphoria of creating an ITSC with quotes heralding a time of new IT/business teamwork to support growth and change is quickly replaced by indifference and apathy.
Why does the business quickly become disengaged in ITSC meetings? Unfortunately, the ITSC meetings quickly devolve, with the CIO doing most of the talking while participants are focused on answering emails or taking advantage of the time by napping. If this is how your meetings are going, it is time to reinvigorate the ITSC.
There are three simple activities to revamp the ITSC and renew the IT-business relationship:
1. Update the guiding principles
This is a one-time exercise that helps re-engage the business. Work as a team to develop or update guiding principles for how projects are identified, selected, and prioritized. The guiding principles need to ensure clear, two-way communication so that IT is not just an order taker. IT needs to be able to ask why a project is necessary and make suggestions for alternative solutions that could be more cost effective. On the other hand, business needs to be in a position to turn down IT suggestions for implementing new, unproven technologies that may not add value. This helps keep both organizations from succumbing to the urge of chasing the newest, shiniest ball.
Additional principles around how to develop and approve IT operational budgets are also critical. While the CIO can take the first stab at updating the guiding principles, the ITSC members should provide input before final approval. This ensures buy-in from all participants.
The updated guiding principles should be clearly communicated across business and IT so there is no confusion when projects are identified, evaluated, approved, prioritized, and executed.
2. Let business do most of the talking
The key to maintaining the right level of interest and participation is to talk about major upcoming business initiatives and the possible impact on IT. Do not dive into technical solutions immediately! If an IT need is brought up, probe to determine how much research the business has done to identify a solution. Exhaust process or organizational solutions first to confirm that a true IT need exists.
Once the IT need has been identified, create a combined business/IT team with responsibility to work through the requirements, system alternatives, and recommendations before the next ITSC meeting. Create a realistic business case with a well-thought-out budget so the business lead can present the results of the team’s effort.
Going through this exercise as an ITSC helps prevent unnecessary IT spend. A new marketing and sales program may not require a multimillion-dollar CRM system. Something as simple as modifying existing reports could suffice.
3. Ditch the boring operational reports
Be careful when it’s IT’s turn to share. Nothing drives a business person on the ITSC to start emailing from their smartphone faster than a jaunt through a series of uptime reports. Adding a series of technical acronyms only makes it worse, pushing most to start thinking about lunch.
Instead, spend time sharing upcoming operational activities that could have an impact on business. Consider the audience and the metrics that are important to them. How does your activity impact those projects, decisions, timelines, and budgets?
One example of a meaningful conversation is upcoming upgrades that could create operational system downtime. Work with the business to coordinate schedules to reduce the chances of major shutdowns. Upgrading the GL during the middle of year-end close is probably a bad idea.
Rejuvenating the ITSC is only slightly less difficult than getting rid of the stench of rotten fish. However, a smoothly functioning ITSC is critical to having IT and business work together as a team to support growth and change. Getting the two working together just takes a little elbow grease… and maybe a little bleach to get rid of the fish stench.
I Don’t Speak IT: How to Get What You Want from Developers
by Nicole Higle
When businesses turn to software developers to modify reports, workflows, and general system functionality, they too often find themselves saying, “It still isn’t right!” The truth is, developers often think in ways that are unfamiliar to those who don’t have a technical background. If there are any holes in receiving development requests, the technical team is left to fill in the gaps by making assumptions. These assumptions are rarely correct and often result in frustration from both sides.
This same principle applies to visiting a foreign country without speaking the native tongue. Google Translate is quick and easy to use, but not 100% effective in conveying the message. Translation tools give literal interpretations, but sentence structures vary across languages and idiomatic sayings are rendered meaningless in word-for-word translation.
Businesses can avoid miscommunication and thus the burden of costly, wasted development time by following these steps:
1. Conduct discovery sessions
Business areas often communicate development requests in passing or between meetings, and a true understanding of the request is lost. Schedule a formal meeting to discuss new development efforts so the development team can fully understand the issue. During discovery sessions, the business should provide a visual of how data is currently displayed and how it should look in the future. Pull in a projector and walk through software and reporting portals to ensure developers understand how data is presented in the user interface.
2. Document business requirements and establish a timeline
Following discovery sessions, the development team should create requirements documentation. Present documentation in a consistent format that outlines the purpose of development requested, business use, general requirements, business rules, and an expected completion date. The business must sign off before development starts. This will ensure the technical team is not left to make assumptions, which generally results in the business paying for wasted development. Signatures show both sides understand expectations for delivery.
3. Keep a development log
A log of development requests allows the business to track tasks currently in progress and mark backlogged items. Finalized modifications should be marked complete in the log and communicated to business users impacted by the change.
Development logs are also a useful reference tool. New team members can refer to the logs to understand what standard software functionality has been enhanced along with business justification for the updates.
4. Perform preliminary testing before releasing to UAT
It is common for development teams to only verify coding changes in backend databases where they are most comfortable working. To be thorough, development teams should also verify that changes are displayed on the front end where information is available to the business. Check for updates in both places to mitigate the risk of having users test updates before they are ready.
5. Require business signoff to promote updates to a live environment
After coding updates have been verified, they should be released to a clean testing area where the requester(s) perform User Acceptance Testing. During this time, the business will run through test cases and scripts to ensure the updates do not impact the full end-to-end process. The assigned tester should provide proof of the successful test run, which will indicate that the fix is ready to be promoted to the live environment.
Working with a development team can be difficult for non-technical people. IT vernacular can be as intimidating as a foreign language, but running through these simple exercises will eliminate wasted development efforts. Trenegy works with businesses and development teams to smoothly manage large-scale system implementations.
How Inconsistent Terminology Hurts Your Company
by Rachel Claggett
Eighty-eight percent of companies that actively manage their terminology reported a 74% increase in product quality (per a 2016 survey by the Local Industry Standards Association).
Consider an American eating at a restaurant in the UK. They order chips, but are surprised to receive french fries instead. Like nations, companies—and even departments within companies—have unique terminology that can cause confusion. Employees who move to a new company or department deal with inconsistent or poorly defined terms and often end up frustrated. The intracompany “dialects” can be confusing and counterproductive. How can companies clearly define their terminology to increase productivity and save money? Consistent and well-defined terminology is especially important in three major areas:
Performance Metrics
Companies create measurements and define standards of success. To accurately track performance, performance measures must be understood across the company. In one instance, division managers were told that performance would be judged based on lease operating costs per barrel of oil produced. Each manager was responsible for pulling together his own report. Here’s the catch: the term “lease operating cost” was not clearly defined across the company. Some managers included property tax while others excluded it, considering it a corporate expense rather than a local one. Divisions that didn’t include property taxes showed lower operating costs and better (but misleading) results. The lack of a clear and consistent definition of “lease operating cost” directly impacted not only performance results but also the company’s perceived financial state. Having a consistent definition of performance metrics and the associated components is crucial.
Policies and Procedures
Companies outline policies and procedures for daily operations of the company, often using corporate language. Employees are expected to comply with these policies, so understanding matters. For example, Management of Change (MOC) is the process by which companies ensure that health, safety, and environmental risks are controlled when the company makes major changes in facilities, operations, or personnel. MOC involves a multitude of forms and activities, all carefully coordinated among all divisions and departments. When every step in that process has a different name or meaning in every division, it causes all kinds of overlap, rework, and duplication. Clearly defined and consistent vocabulary for all the forms, activities, and divisions involved can streamline the process and save time and money spent on all projects.
Strategy, Culture, and Operations
Specific language is used to describe the corporate mission, vision, and values. The mission should be universal for the company, yet many individuals are focused on their own performance and that of their division. Accounting doesn’t need to understand anything but debits and credits, right? Wrong. An accountant who lacks understanding of the core of the business cannot provide value-added analysis. The accountants should be able to help operations understand how their decisions impact financial results. A mission statement claiming that a deepwater drilling company is “top-drive focused” is meaningless if most employees don’t know what a top-drive is. Operations and corporate functional areas should be strategically and culturally aligned behind the core business. Clarity on what the core business truly is has an impact on motivation and bottom line results.
If company-wide confusion exists in performance metrics, strategic communications, or policies, it’s probably time to consider evaluating terminology. The solution can be as simple as providing resources and training materials that unify corporate and industry language. Many companies have put their company terminology on company intranet blogs, allowing employees to post comments and answer questions about the terminology. For example, a services company assigned departmental owners for each term. Employees in other departments could use the blog to ask questions about the term. Having a forum for employees to ask, “What is a top-drive,” or “Are property taxes included in lease operating expenses?” allows a company to become aligned across departments and divisions. For more on this topic, check out our book “Jar(gone),” which describes commonly misunderstood business terminology.
But I Bought a Bowflex! Why Technology Alone Won’t Improve Your Business
by Mary Critelli
The U.S. weight-loss industry totals $20 billion in annual revenue. To shed pounds, people purchase ab crunchers, Bowflexes, and the oh-so-ingenious shake weights. But over time, exercise machines collect dust, clutter yard sales, and consume Craigslist ads.
Many are dumbfounded when they don’t lose their desired weight, saying, “But I bought a Bowflex!?”
Unfortunately, we can’t automate everything. Simply purchasing equipment is not enough. Behavioral changes are needed to achieve desired goals.
We have seen executives fall victim to a similar conviction that purchasing an expensive ERP system will miraculously solve their company’s problems. In hindsight, they realized the company’s processes and behaviors were the root cause of the issues. Processes should have been addressed before committing to an expensive purchase. To avoid purchasing a tool without realizing improvement, executives must first confirm a system is necessary to accomplish desired goals.
When a system is needed, a successful team will not put sole emphasis on technology but will consider processes and people along with the new system. The team takes the time to identify trouble spots, improve routines, and evaluate metrics.
Identify Areas for Improvement
One of the first things a trainer asks is to identify trouble spots, or areas to improve. Not until goals are identified can the trainer begin to provide recommended steps.
A former oilfield services client blamed their lengthy month-end close on their outdated ERP system. However, we quickly found the culprit to be the numerous manual journal entries their accounting team processed to reconcile operational errors occurring in the field. When the company purchased the new system and did not teach new behaviors, all they gained was a system just as broken as the old one. Once the root cause was identified and operations personnel were trained on new processes, the manual adjustments diminished and financial close was reduced by ten days.
Before investing millions in an ERP system, evaluate whether the system is necessary. A system will not fix issues in a vacuum. Processes need to be fixed first.
Change Behavior
Once trainers identify trouble spots and establish associated goals, a strategy can be executed to achieve desired results. Trainees start a workout regime that works best for them and change old eating habits accordingly. In business, old routines have to change to support process and technology improvements.
An oilfield services company growing more than 25% per year performed several system implementations concurrently to support their explosive growth. In the midst of system implementations, the company began experiencing relatively high turnover. The turnover was most likely a result of placing new burdens on already strained resources.
Employees who remained with the company latched onto the old systems, even as technical support was withdrawn and the systems were decommissioned. They trusted the old systems’ data and felt comfortable with the familiar interface.
To combat the high turnover and create a transition from the old to new ways of working, executives allocated resources to spend the extra time documenting and communicating the new processes. Employees immersed themselves in training classes with assigned homework and follow-up refresher courses. Strong executive sponsorship ensured employees had the support needed to balance both their day-to-day tasks and end-user training.
With effective change management incorporated, employees adjusted their routines to mesh with the new system and adapted their behaviors to the changing business environment. Once employees embraced the process changes that came along with the new system, their jobs became less painful and turnover decreased. In addition, they had documented processes which could be easily transferred to new employees.
Robust change management and standardized process documentation are essential to unlocking the full potential of a new system and changing behavior.
Evaluate Metrics
One of the most essential elements to any successful workout plan is continuously benchmarking success. Personal trainers help clients evaluate metrics, such as pounds lost and body mass index, and adjust routines based on progress reflected in the metrics. Both trainers and trainees are held accountable for reaching their goals and working together to cross the finish line.
Measuring the success of a systems implementation is similarly important. Keep realistic goals in mind and continually benchmark progress against those goals.
Distinguish the critical processes that must be improved to deem the implementation a success and monitor the status of those critical processes to ensure improvements are realized.
One of our former clients, the CEO of a midstream pipeline company, defined one of his critical success factors as achieving an efficient and effective financial reporting process. The measures of success were days to close the financial books and prior period adjustments.
During a fast-paced ERP implementation, his team became overloaded implementing more than just the general ledger and accounts payable modules. He saw that the timelines were slipping. Immediately, the CEO shifted his employees’ focus back to the core financial reporting goals they had established at the beginning of the project.
Implementing general ledger and payables became the priority, while the other modules were delayed until Phase II of the implementation. Keeping a realistic perspective on personnel capacity and making adjustments along the way were crucial components of success.
Conclusion
Trenegy can’t shrink waistlines, but we can help organizations implement changes with newly designed processes and systems. We help companies shave days off the financial close, eliminate the pain and frustration of validating report data, and revitalize the routines that run their business. We help companies go beyond simply buying a new system. We help them incorporate the system into an improved way of doing business.
When clients target their trouble spots, improve their processes, and evaluate their results, they don’t just buy a system. They own it.
When It’s Time to Change, Who You Gonna Call? Changing the Approach to Change Management
by Gracilynn Miller
Imagine spending millions of dollars and a year or more on a system implementation or upgrade project only to have it completely fail because of lack of stakeholder buy-in. This is an all too familiar story for many organizations that do not engage proper change management support.
One critical decision is often overlooked when planning an ERP system implementation project: selecting resources to lead the most critical part of the project, change management. Organizations spend a significant amount of time qualifying the technical resources proposed by the large systems integration technology firms. During this process, the organization mechanically selects the systems integrator to also lead the change management portion of the project. This decision does not always achieve the expected result.
Should organizations use systems integrator resources or a specialized independent firm to lead the change management thread of a project?
There are pros and cons to either approach. Organizations believe they can staff all the external resources required for implementation from one firm, thus having one party to hold responsible. This simplifies the budgeting and cost management components of a project because all external resources are provided by one firm. However, using systems integrators for change management often results in budget overruns, a lack of time and attention to change activities versus technical activities, ineffective resourcing, and tainted guidance.
Getting Your Money’s Worth
During the proposal process, systems integrator change management resources will be factored into the technology firm’s overall budget proposal. However, when budget negotiations begin or constraints arise, change management resources are the first component of a budget to be cut. Most technology firms’ project teams assume a successful transition can be achieved with scaled-back change management resources and rely on the technical configuration team to pick up change management activities. The scaled-back model rears its ugly head after the implementation is complete. By then, the system integration team is gone.
A few years ago, an oilfield services client’s CIO called us in a frenzy. They had engaged a large systems integrator who was not working well with end users. The change management team was not engaging with the end users, and the CIO could not get the systems integrator to develop a training strategy at a reasonable level of detail. We did a quick review of the systems integrator’s work plan and found that the change management resources had been stripped from the plan. During implementation, the systems integrator had shifted change management time to interface development to save costs. We brought this to the client’s attention, and the CIO had some difficult discussions with the systems integrator.
Most recently, we received a frantic call from a client who had completed an ERP implementation using a large systems integrator for change management. The systems integrator stripped training out of the consulting budget, and the client’s internal team had to conduct all training activities. Unfortunately, this caused certain parts of user acceptance testing to be overlooked, which led to a whole series of issues post-implementation, including the inability to close the financial books on a timely basis.
Using change management resources from a firm independent of the systems integrator allows the client organization to have direct line of sight into the change management budget and activities. From the outset of a project, clients know what they are paying for and the change management team’s commitments are clear. There is less of a chance for bait-and-switch issues.
Where Did the Time Go?
The level of attention and amount of time effective change management requires is often overlooked. Many assume a change management initiative can be completed by simply talking to a few stakeholders and crafting some email blasts. Effective change management requires a team to be thoroughly invested in all facets of an implementation throughout the course of the entire project. The change management team will not only interview stakeholders and develop communications but should also evaluate system design and define training. Each change management activity requires consistent resource commitment.
Systems integrator teams assume change management activities don’t require full-time attention. Their resources will often end up splitting time between multiple client projects. Other times, systems integrator resources are pulled into other activities within the same engagement.
Important details related to the impact of system changes are often overlooked or change management deliverable output is rushed. Both result in ineffective change management efforts and, ultimately, a lack of user acceptance.
Dedicated third party resources engaged for change management purposes are solely accountable for the change management aspect of a project. For example, one of our drilling clients leveraged Trenegy as a third-party to lead the change management thread for a large SAP implementation. Our client’s project manager created a separate set of metrics and status mechanisms to hold our team accountable for change management.
Our team had a laser focus on communications, acceptance, and training activities and there was no blur between these tasks and technical tasks. We had no excuses for flaws in executing change management activities, and the systems integrator had no excuses for not executing the technical tasks. Moreover, since our team was separate from the systems integrator, there was no risk for the change management team to be unnaturally pulled away to conduct technical activities.
Service Line Trumps Client Service
Although most systems integrator firms package change management along with implementation or integration services, the two are entirely different service lines within the firm. Clients would like to think the two groups are playing for the same team, but they are not.
Similar to a large company’s Accounting and Human Resources departments, a systems integrator firm’s service lines function independently of each other and report to different divisions or sectors of the firm. When the two service lines are engaged on a project, internal struggles over the two groups’ roles and responsibilities arise.
The root of the conflicts is money. The more billable hours a systems integrator project manager can get out of his technical team, the better the systems integrator’s project and service line perform. The systems integrator project manager would prefer to have his technical team conduct as much of the change management activities as possible.
During a recent client engagement where the systems implementer’s resources were used for both change management and technical project components, struggles and competition between the two service lines emerged. The change management practice staffed the project with five resources, two of whom were to be solely dedicated to training material development. However, the ERP project manager determined the two training material resources should be removed from the engagement. The ERP project manager decided the training material development would instead be completed by members of the technical team. While the technical team resources were neither experienced nor proficient in training material development, the ERP project manager wanted more resources from their ERP practice attached to the project.
The result was ineffective training materials developed by inexperienced resources.
Utilizing third-party resources for the change management component of a project eliminates service line competition. Resources from an outside firm will not be concerned with ensuring their service line appears in the best light. Additionally, the tasks and responsibilities of external change management resources cannot be dictated by the systems integrator (SI) project manager. The SI project manager will not be able to assign change management tasks to the SI team, as these activities do not fall within the SI firm’s project scope.
Who Is Minding the Store?
Change management resources need to have some level of independence and impartiality from the systems integrator team. In other words, the change management team should report directly to the client team and not be under the leadership of the systems integrator consulting team.
Change management strategies, opinions, and concerns should be discussed directly and openly with client personnel to prevent the topic being ignored.
As discussed earlier in this paper, the systems integrator change management resources are subservient to the systems integrator technical or ERP project manager. Therefore, the systems integrator change management resources are afraid to challenge the implementation team when they see a red flag in how the technical team is addressing certain business process requirements.
We were recently brought on to evaluate an ERP project that was destined for failure. Prior to our arrival, the field services client utilized a combined systems integrator technical and change management team from a global consulting firm. During the first round of conference room pilots, several of the district managers caught wind of some new ERP procedures mandating their dispatchers to collect an inordinate amount of data prior to dispatching their service personnel for customer jobs. They foresaw the new ERP design could bring their business to a halt.
As part of our review, we interviewed the systems integrator change management team to hear their perspective. The systems integrator change management consultants had only remotely heard of the problem and felt it was not their place to challenge the issue. Moreover, we found the systems integrator project manager would not allow their own change management consultants to participate in the conference room pilots. We did a quick review of the recommended dispatch process and agreed that the new process was not feasible.
Clearly, the client was being led down the wrong path. We found this to be just one example of other recommended technical solutions that were not practical or designed properly for the field services business model. We were asked to take on the change management activities.
During this process, we insisted on being a part of the conference room pilots. We spent considerable time challenging the proposed processes and working with the client and systems integrator technical teams to identify solutions to fit the company’s organizational capabilities.
Change management teams should not be afraid to challenge the systems integrator team’s decisions or question processes. If change management resources become too concerned with upsetting the apple cart, the change management team loses sight of their true purpose.
Conclusion
Managing the implementation of a new ERP or technology solution is all about minimizing risk to achieve success. Clients cannot minimize risk without direct line of sight into the heart of the ERP implementation: change management. With the independent change management model, change management issues are no longer masked, resources are allocated to support the project’s success, and the steering committee has a sounding board for a second opinion when project challenges arise.
Knowledge Management: More Than an IT Solution
by Adam Smith
“Everyone is talking about the weather but nobody is doing anything about it.” —Mark Twain
Organizations recognize that knowledge among team members is an invaluable resource, but many companies have yet to succeed in fully executing a knowledge management solution. The main reason for failure is that knowledge management too often begins and ends with technology. Corporate executives maintain extensive databases full of information for employees to reference. Worse, companies invest large amounts of money in IT solutions and receive few sustainable results. In reality, knowledge management involves people, culture, and a new way of thinking. Technology is merely an enabler.
Understanding how to implement a successful knowledge management program begins with recognizing barriers to success, which can be categorized as lack of employee involvement, rigid organizational structure, and overdependence on technology.
Lack of Employee Involvement
Regardless of how knowledge management is promoted by executives, it is not considered critical in day-to-day operations. Employees typically come to work with a task list, or an idea of what they plan to accomplish before the work day ends. Because knowledge management is likely not included in this, employees don’t often set aside time for it. Lack of employee involvement is a common cause of failure.
Knowledge is power, or so they say. In today’s business environment, jobs and promotions are highly competitive. Employees often view tacit knowledge as a means for survival, while disregarding the benefits of sharing what they know and learn. Using knowledge to increase personal value often decreases the organization’s value.
Rigid Organizational Structure
The structure of an organization can greatly impact knowledge management, as hierarchies and reporting relationships can restrict knowledge flow. Employees are generally content to work strictly within departmental and regional confines. Company culture also has great influence; an inflexible culture may not encourage the flow of knowledge as an open culture might.
Many knowledge management programs are hastily thrown together and have little input from those who hold the coveted knowledge. Knowledge systems end up as a collection of old information rather than a source of learning.
Overdependence on Technology
Companies often invest money in technical solutions which merely become poor substitutes for personal interaction. Interpersonal communication is the backbone of knowledge management. Face-to-face interaction fosters learning and allows employees to ask questions and brainstorm together. It’s easier to ask a colleague for a solution than to dig through documents in a database. Technology and social interactions should complement each other, not replace each other.
Knowledge management systems can be hampered by a lack of ownership. A large data repository is great, but if no one is assigned to manage it, data can accumulate and become disorganized. Employees will be less likely to utilize a muddled, cumbersome system, resulting in a stagnant database.
Steps to Success
- Define the knowledge components that are critical to the organization’s success and determine which key business processes enable knowledge management. Then, map the flow of existing critical knowledge.
- Identify organizational roles for each knowledge management component. Individuals and teams in an organization will play varying roles, from knowledge source to knowledge receiver.
- Advocate the importance of knowledge management programs. The level at which these programs are supported by leadership will affect how employees respond. Employee buy-in is the tipping point between success and failure.
- Integrate knowledge management into all business processes and operations. This can take time, but it ensures knowledge is continuously shared and improved upon. Knowledge management is a journey, not a quick fix.
- Identify where high-risk knowledge leaks might occur. Seasoned employees who plan to retire in the near future are a common source. Apprenticeship or mentoring programs will help cut down on knowledge gaps as people retire or leave the organization. Knowledge management techniques have been successfully used by trade organizations for centuries. How else would artisans and stonemasons pass on their knowledge?
Ultimately, companies want the knowledge and experience developed over many years to stay within their organization as new people are hired and others leave. To achieve this, leadership must view knowledge management as a holistic effort requiring significant process and culture change.
Demystifying IT Jargon
Leveraging Digital Pricing Solutions to Drive Revenue
by Joseph Kasbaum
Dynamic pricing, or charging based on a customer’s willingness to pay, was once considered an airline gimmick at best or price discrimination at worst. The shift in public perception of dynamic pricing is exemplified by ride-sharing behemoth Uber. Dr. Steven Levitt recently analyzed Uber’s surge pricing model to determine the UberX demand curve. In the working paper developed with several other economists, Dr. Levitt noted that even with Uber’s demand-based surge pricing, the company was leaving $6.8 billion on the table in consumer surplus. Uber’s pricing strategy may need tweaking to capture some of those billions, but the preliminary results show that customers accept and appreciate a demand-driven dynamic pricing model, and the inflexible pricing model that taxis use is woefully outdated.
Not every firm is Uber. However, any company can rethink their pricing strategy to drive revenue. The development of powerful price optimization and management (PO&M) tools has provided our clients with top-line growth. Using a digital PO&M solution allows companies to automate and facilitate the inquiry-to-order process, creating actionable data about purchasing behavior. Unfortunately, the immaturity of the market is leading to consistent pitfalls in implementations. We at Trenegy have consolidated the struggles we often see into four key considerations that organizations must know before purchasing:
Understand Complexity and Timing
Implementing the right PO&M solution is a complex undertaking. Most firms have experienced the challenges of implementing ERP and other EPM solutions, and those challenges are often exacerbated by a lack of planning. Similarly, PO&M implementations require a robust planning phase to prepare the data integration and the underlying algorithms which enable dynamic pricing. The firms that achieve the least amount of value from their digital pricing solutions are the firms that expect it to be plug-and-play. Because of the solution’s complexity, companies must invest in training, change management, and defined roles and responsibilities to roll out a PO&M tool effectively.
Know Your Data
The process of streamlining and cleansing the data required for pricing analytics is often the most painstaking and time-consuming activity of an implementation. Ensuring this is done correctly prior to go-live is paramount. All of our manufacturing and distribution clients have similar issues with their master data. Common pain points include ERP data that doesn’t align with business intelligence data, sales team price lists on outdated Excel sheets that don’t match contracts in the database, or budgeting tools that have different product roll-ups than the S&OP tool. Because PO&M systems pull from a variety of sources, key rationalizations must be made during the project’s design phase (a brief reconciliation sketch follows these questions):
- What data does the PO&M system need? At what level?
- Which sources have this data? Will it be integrated?
- Have you reconciled the source data at all levels?
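Before go-live, each source can be checked against the others at the levels the pricing tool will use. A minimal sketch, using hypothetical product-line totals pulled from an ERP extract and a BI extract:

```python
# Hypothetical extracts: revenue by product line from two source systems.
erp_totals = {"Hose": 1200.0, "Coupling": 800.0, "Fittings": 450.0}
bi_totals  = {"Hose": 1200.0, "Coupling": 790.0}   # Fittings missing, Coupling off by 10

def reconcile(source_a, source_b, tolerance=0.01):
    """Report product lines that are missing from one source or differ beyond a tolerance."""
    issues = []
    for line in sorted(set(source_a) | set(source_b)):
        a, b = source_a.get(line), source_b.get(line)
        if a is None or b is None:
            issues.append(f"{line}: missing from one source")
        elif abs(a - b) > tolerance:
            issues.append(f"{line}: {a} vs {b}")
    return issues

for issue in reconcile(erp_totals, bi_totals):
    print(issue)
# Coupling: 800.0 vs 790.0
# Fittings: missing from one source
```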
Design a Strategy
Our clients seek a PO&M tool because their current cost-plus, absorption, or contribution margin pricing strategy isn’t capturing enough value. Dynamic pricing is a powerful business concept that can help grow revenue by up to 10%, but purchasing a PO&M solution without a defined dynamic pricing strategy is like leaving the engine out of your Uber LUX sports car. What drives the tool is the consensus among finance, sales, and operations: what will they charge for each SKU in each region while accounting for key dynamics like supply/demand, customer base, and sales promotions? All of these factors shape the algorithm that is the foundation of the PO&M solution.
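The pricing logic itself does not have to be exotic. Here is a minimal sketch, with purely illustrative factors and weights rather than any vendor's actual algorithm, of how agreed-upon inputs from finance, sales, and operations might combine into a quoted price:

```python
# Purely illustrative: a base price adjusted by factors the business has agreed on.
def quote_price(base_price, demand_ratio, segment_discount, promo_discount,
                floor_margin=0.10, unit_cost=None):
    """Combine agreed pricing factors, never quoting below the margin floor."""
    price = base_price * demand_ratio * (1 - segment_discount) * (1 - promo_discount)
    if unit_cost is not None:
        price = max(price, unit_cost * (1 + floor_margin))
    return round(price, 2)

# A SKU quoted in a high-demand region, for a key account, during a promotion.
print(quote_price(base_price=100.0, demand_ratio=1.15,
                  segment_discount=0.05, promo_discount=0.02,
                  unit_cost=70.0))
```

The hard part is not the arithmetic; it is getting finance, sales, and operations to agree on what each factor means and who owns it.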
Manage the Organizational Impact
Effectively implementing a standardized pricing strategy and solution relies on change management initiatives. Throughout the project, the team must develop clearly defined roles and responsibilities, process flows, and policies to align the customer-facing functions. After the tool goes live, the sales and marketing groups need to work together to leverage their capabilities to generate greater revenue.
Cryptocurrency: Gambling or an Investment?
by William Aimone
During a lunch meeting, I mentioned my investment in the Ethereum cryptocurrency was growing in value. My business partner shrugged it off and avowed, “Investing in cryptocurrency is just gambling!” A sudden rush of guilt overcame me and I remorsefully repented.
Well, the last part didn’t actually happen, but the comment made me think.
Gambling is defined as risking money on an event’s uncertain outcome with the intent of winning money. By the broad definition, investing in the stock market, real estate, baseball cards, coins, and Norman Rockwell plate collections would all be considered gambling. The thin line between pure gambling and investing is the value creation element.
Gambling is a zero-sum game: there is a clear winner and a clear loser in every case. No value is created in gambling.
When investing, value is created. An individual purchasing stock allows a corporation to use the invested funds to build factories. The factories enable the corporation to grow earnings and increase the value of the stock. Both the individual and the corporation win. Value is created. Therefore, investing is not really gambling in the true sense of the word.
Is investing in cryptocurrencies such as Bitcoin, Ethereum, Litecoin a zero-sum game? Are there winners on both sides of cryptocurrency? Is value created from cryptocurrency investments?
Speculation Is Part of Investing
We can only hypothesize about the intentions of the founders of cryptocurrencies. Were they merely intending to propagate illegal trade? Were they intending to end the stranglehold the banking system has on commerce? Were they frustrated with governments controlling our economies through politically motivated fiscal policies? Were they seeking to eliminate barriers in international trade?
Irrespective of the intentions, investors are speculating value into cryptocurrencies. It’s no different than investing in unprofitable platforms like Facebook, Uber, and LinkedIn during their IPOs. Investors speculated future value and eventually profitability resulted. This was an investment because the speculation was a win for all involved.
Value Equals Disruption
Warren Buffett was recently quoted as saying, “In terms of cryptocurrencies, generally, I can say with almost certainty that they will come to a bad ending.” 37% of his $178 billion Berkshire Hathaway portfolio is invested in bank stocks. Does he see cryptocurrency as a potential disruption to his bank portfolio?
Nonetheless, investors in Bitcoin and other cryptocurrencies are speculating future value largely due to the potential disruption of banking systems. There is value in disruption. Anyone claiming no value in disruption needs to give away all their belongings and go live in the woods for a few weeks. Disruption creates value by allowing people to do things in a more convenient way. Will Bitcoin allow us to more conveniently purchase our wines directly from the French vineyard? Will Ethereum replace the poorly designed and managed euro?
This speculation is no different than an oil company speculating a drilling investment off the coast of West Africa or a pharmaceutical company testing a new cure for cancer. If the oil company is correct, they will supply millions of people with much needed energy to heat their homes. If the pharmaceutical company is correct, millions of lives will be saved. If cryptocurrency investors are correct, millions of people may be given an alternative to our current inefficient government run banks.
I can now sleep at night and avoid going to confessional with my small investment in Ethereum. Same with my investments in Amgen and Chevron.
Blockchain Defined: 5 Myths or Facts?
by William Aimone
How often do we hear of a new technology claiming to change the world as we know it? Remember when the Segway was going to transform how cities would be designed? Or Theranos was going to revolutionize the medical industry and prevent disease? Or the utopian Hyperloop would transform travel between cities?
Today, it’s blockchain. “Blockchain will be the next internet.” “It will completely change the financial industry.” “It will replace ride-sharing platforms.” We’re hearing it all. Is this hype or reality?
What Is Blockchain?
Blockchain is a new technology architecture enabling secure and direct transactions between people. A transaction can be a simple exchange of money, the sale of a song, or signatures on a real estate contract.
Today, most transactions are performed (and guaranteed) through a centralized clearing house or governing organization. The central clearing house or governing organization collects fees along the way. Visa or MasterCard collects purchase transactions from consumers, charges a fee, and distributes the remainder to the merchants. Similarly, music companies collect acoustic creations, distribute them to consumers, collect the fees, and then distribute what is left over to the artists. These centralized clearing houses, or middlemen, are often considered an annoyance to the suppliers (merchants and artists). Credit card merchant fees eat into already razor-thin merchant margins. Artists feel as though they are not getting a big enough piece of the pie.
Imagine if the merchants and artists decided to transact without the man in the middle. Nordstrom exchanges shoes for fur pelts instead of central bank issued dollars. Jay-Z sells “99 Problems” directly to millions of consumers. It would be the wild west all over again!
But wait, what if there were a means to facilitate the direct exchanges without the wild west, and within a controlled, secure environment? Herein steps the blockchain revolution. Blockchain provides the security and structure of a centralized clearing house while allowing for the direct exchange of goods or services between a supplier and a consumer. Blockchain is a distributed technology where there is no central technology hub (well, sort of). Consumer and supplier information is maintained locally on distributed computers. And the beauty of blockchain is the way it copies and stores data between the distributed computers. Certain information is shared between the consumers and suppliers, while other information remains private. Furthermore, blockchain technology proposes to be more secure than the traditional centralized model.
A simple way to explain blockchain technology is the game of Go Fish. The blockchain is the deck of cards, and the block is a single card. Each player is a node in the blockchain, the dealer is the wallet provider, and the cards each player holds are a unique sequence of blocks. Each player's hand is held privately until someone wants to validate a transaction. The first player reveals one block of information, "Jimmy, do you have a 7?" Jimmy validates the transaction with a match by handing over his 7 of hearts to the first player. The block has now been added to the first player's chain.
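To ground the card-game analogy, here is a minimal, hypothetical sketch (in Python) of how blocks are chained: each block stores the hash of the block before it, so tampering with any earlier block invalidates everything that follows. The field names and toy transactions are illustrative only and don't reflect any particular blockchain's actual format.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transaction):
    """Append a new block that points at the hash of the previous block."""
    previous = chain[-1]
    chain.append({
        "index": previous["index"] + 1,
        "timestamp": time.time(),
        "transaction": transaction,
        "previous_hash": block_hash(previous),
    })
    return chain

def is_valid(chain):
    """A chain is valid only if every block still matches its successor's pointer."""
    return all(
        chain[i + 1]["previous_hash"] == block_hash(chain[i])
        for i in range(len(chain) - 1)
    )

# Toy example: the "deck" starts with a genesis block, then players add cards.
chain = [{"index": 0, "timestamp": time.time(), "transaction": "genesis", "previous_hash": ""}]
add_block(chain, "Jimmy hands over the 7 of hearts")
add_block(chain, "Sara hands over the queen of spades")
print(is_valid(chain))                                # True
chain[1]["transaction"] = "Jimmy keeps the 7"         # tamper with history
print(is_valid(chain))                                # False: the chain detects the change
```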
Myth or Fact: Blockchain is completely decentralized
The most popular example of blockchain technology is the Bitcoin currency. Bitcoin transfer is conducted directly from the consumer to the supplier. The money is not transferred through a central bank or clearing house. However, it is somewhat deceiving to say Bitcoin is completely decentralized. The transactions are decentralized, yet the management of the currency exchange rates, the software applications, and the people managing the exchange are centralized. For example, Coinbase is a digital asset broker facilitating the exchange of Bitcoin. It is headquartered in San Francisco, where it manages the trading, software, and support of the exchange. Therefore, claiming blockchain is decentralized is not completely true.
Myth or Fact: Blockchain eliminates transaction fees
The adage, "There's no such thing as a free lunch," applies to blockchain. The blockchain's flagship Bitcoin is not free. To purchase Bitcoin, one would need to engage a brokerage firm such as Coinbase. The brokerage firms charge a transaction fee when purchasing Bitcoin. Exchange rate arbitrage results in additional "fees" when trading Bitcoin. Blockchain applications require someone (aka a "wallet provider") to build and support the software intelligence. The building and maintenance of blockchain applications require people who wish to get paid for their work. Someone must pay the piper.
Myth or Fact: Blockchain will turn the banking industry on its heels
Whether you are a fan of our current banking system or felt the bailouts were frivolous, the banking industry is going to change with blockchain technology. However, banks need to look at Bitcoin and blockchain separately. Many of the traditional banks are keeping a vigilant eye on Bitcoin and researching how blockchain can be used. Large banks can leverage blockchain technology internally to become more efficient and ultimately to reduce the fees they charge merchants and consumers. For example, the banks could collaborate to replace the traditional ACH (automated clearing house) with blockchain technology. Reducing bank fees and improving efficiency will lessen the attraction of the Bitcoin movement. The big unknown is how regulators will respond to Bitcoin.
Myth or Fact: Blockchain will replace the internet and other platforms
If robots replace humans, then who is going to make the robots? A blockchain application will need to use the internet to communicate. Without the internet, blockchain developers have no means to communicate or transact. Claims that blockchain will supplant Uber and Lyft by allowing people to directly interact for ride sharing are far-fetched. The ride-sharing companies' value-add is the interface and software application for matching drivers and riders. The need to support a software application for requesting a ride doesn't disappear with blockchain. However, the ride-sharing companies may decide to change the architecture of their applications to a distributed blockchain architecture.
Myth or Fact: Blockchain will completely transform other industries
Blockchain is a more likely candidate in industries where friction exists. Friction exists where the man in the middle is taking a large portion of the fees. Friction exists where numerous parties are involved in a simple transaction, which causes delays. Imagine a platform where music is exchanged directly between artists and consumers with blockchain technology. Imagine buying property where the purchase agreements are authorized by the title company, bank, agents, attorneys, sellers, and buyers simultaneously.
In sum, blockchain will help certain industries become more efficient while it will transform others. The reality is, blockchain technology will be a nice complement to existing platforms and the internet. While the distributed nature of blockchain technology has security benefits, the scalability remains an unanswered question.
Enterprise Blockchain Solutions: Real Value-add or Publicity Stunt?
by Evan Lambrecht
Blockchain mania has reached an all-time high. In December, Long Island Iced Tea, a New York-based soft drink company, saw its stock price soar 200% overnight by simply changing its name to Long Blockchain Corp! Given the hype, boards are now asking how companies can take advantage of blockchain to support day-to-day activities. The revolutionary technology allows information to be shared in a decentralized, reliable, and secure manner, but only creates real value when applied to the correct situations.
Companies can properly take advantage of blockchain when transactions occur between three or more parties that require access to updated, decentralized information. Some practical examples include:
1. Logistics/supply chain
The most common use of blockchain is to ensure timely and accurate tracking of procurement items throughout their lifecycles from source through logistics to final destination. The best example is complex international logistics, where goods often trade hands hundreds of times in dozens of different countries.
IBM and Maersk have recently joined forces to create a decentralized, blockchain-backed record-keeping system that can track the status and location of international shipments in real time. The two companies also intend to incorporate the use of smart contracts: if-then programs that automatically execute once certain conditions have been met. This will reduce the inefficiency created by the paperwork that is prevalent in this line of work.
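The details of the IBM and Maersk system aren't spelled out here, but the if-then idea behind a smart contract can be sketched in a few lines. The milestone names, amounts, and payment step below are hypothetical placeholders for whatever conditions a real contract would encode.

```python
# A hypothetical if-then rule: release payment automatically once the
# required shipping milestones have all been recorded on the shared ledger.
REQUIRED_EVENTS = {"loaded_at_origin", "customs_cleared", "delivered"}

def settle_if_complete(shipment_events, amount_due):
    """Pay the carrier only when every required milestone has been logged."""
    if REQUIRED_EVENTS.issubset(shipment_events):
        return f"Payment of ${amount_due:,.2f} released to carrier"
    missing = REQUIRED_EVENTS - set(shipment_events)
    return f"Holding payment; waiting on: {', '.join(sorted(missing))}"

print(settle_if_complete({"loaded_at_origin", "customs_cleared"}, 18500))
print(settle_if_complete({"loaded_at_origin", "customs_cleared", "delivered"}, 18500))
```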
2. Asset maintenance
Asset-intensive industries require accurate, timely performance information from a variety of internal and external parties to ensure maximum uptime. Blockchain is typically used to share updated maintenance information across asset fleets.
A global compression company uses blockchain to share engine performance and maintenance information so personnel have updated manuals and like-engine failure data regardless of location. This helps reduce downtime by more than 10%. An FPSO company is also using blockchain to ensure all contractors at shipyards have updated blueprint and engineering change order information during new-build or refurb. This has reduced the amount of rework by more than 15%.
3. Joint venture contracts
Upstream oil and gas producers often enter joint venture agreements, where multiple companies engage in exploration projects together to reduce costs and mitigate risk. These agreements are intricate and include the duration, scope of work, and working interest for each party. While typically only one entity handles day-to-day operations, all other parties want to remain informed on the well’s status and performance as it directly affects their bottom line.
By storing records of all activity on the blockchain, stakeholders in the project can feel assured they are receiving accurate, real-time information. Key metrics such as daily production volumes, cost estimates, and project timelines are uniformly accessible to all in real time. The solution can also integrate with all parties’ operational and accounting systems, ensuring precise reporting and eliminating the need for reconciliations.
At the end of the day, blockchain is the real deal. But much like the dotcom bubble of the late 90s, too many people are jumping on the bandwagon without fully understanding how to best utilize the underlying technology.
What Is Robotic Process Automation?
by Mary Critelli
Remember the movie Wall-E? Wall-E is instantly loveable as an innocent and hard-working robot who just wants a friend. Arguably the cutest protagonist ever, he ends up saving planet earth and all the humans who have no idea they need to be saved.
However, cute as he may be, he does not distract from the chilling depiction of future humans—big, fat blobs practically glued to hovering chairs who rarely look away from the floating screens in front of their faces.
It’s eerie to see what could happen once technology progresses to a point where humans are rendered completely useless. Self-driving cars, automated customer service representatives, and let’s not forget actual robots, such as the ones at Amazon that fulfill orders.
Robotic process automation (RPA) is one of the latest crazes that claims to revolutionize the way companies operate, with promises to slash companies' overhead costs while instantly increasing efficiency. RPA uses technology as a substitute for human behavior within an organization's business processes, or more simply, a robot that can do a human's job. It sure sounds cool, but after further digging, it's unworthy of the hype.
RPA’s roots can be traced back to business process management (BPM) software. Originating about 20 years ago, BPM’s focus was to improve and optimize a company’s business processes. BPM software companies essentially were classified into two groups:
- A larger group focused on the business process in its entirety with the intent to optimize, standardize, and streamline from beginning to end.
- A smaller group looked to differentiate themselves by automating business processes using technology to cut out the human element.
Unfortunately for the smaller group, their innovation did not catch on. They could not compete, and were gobbled up by the larger companies.
Why did BPM automation fail? One would think most companies would jump at the chance to replace a human with a robot, instantly cutting overhead costs and increasing efficiency. Truth is, the automation was nothing more than software that recorded an employee’s clicks and keystrokes as they performed a task in their system and then mimicked those clicks when prompted.
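To make concrete how thin that "automation" really was, here is a toy record-and-replay sketch. Real BPM and RPA tools drive the operating system's mouse and keyboard rather than printing messages, but the essence is the same: store the steps a person performed, then repeat them verbatim. The screen names and values are made up for the example.

```python
# A toy record-and-replay bot: store the clicks and keystrokes an employee
# performs, then repeat them verbatim when prompted. No understanding of the
# task is involved, which is why exceptions break it.
class ClickRecorder:
    def __init__(self):
        self.steps = []

    def record(self, action, target, value=None):
        self.steps.append({"action": action, "target": target, "value": value})

    def replay(self):
        for step in self.steps:
            if step["action"] == "click":
                print(f"Clicking {step['target']}")
            elif step["action"] == "type":
                print(f"Typing '{step['value']}' into {step['target']}")

bot = ClickRecorder()
bot.record("click", "Invoices > New")
bot.record("type", "Vendor field", "Acme Supply Co.")
bot.record("type", "Amount field", "1,250.00")
bot.record("click", "Post")
bot.replay()   # mimics the recorded keystrokes and clicks, nothing more
```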
Now, we look at RPA. Is it new? Is it different? No. The software vendor Blue Prism recently coined the term robotic process automation with the intent to eliminate the need for business process outsourcing (BPO). However, all they really did was slap a new name on the same old BPM automation to make it sound innovative and new.
What RPA Is:
- Software robot that mimics clicks within a system
- Automation of repetitive tasks
- Set of rules applied to a business process
What RPA Is Not:
- A physical robot that actively completes a series of tasks
- A revolutionary way to cut overhead costs and increase efficiency
- Smart enough to use human reasoning to determine patterns and analyze data
RPA in Business
Automation does have its place in a business. For example:
- Interactive voice response systems (IVRS) aka most companies’ customer service help desk automated recording
- Optical character recognition (OCR) for converting scanned docs into editable data within a system
- Amazon’s physical warehouse robots used to fulfill orders
Think about a company’s back-office functions: accounts payable, accounts receivable, quality and claims, accounting, etc. Based on what RPA is, can a robot actually perform the tasks required from those functions? Consider the AP process. An invoice comes in, must be matched to the purchase order and goods receipt document, and then it can be entered/posted as a transaction that hits the accounting books. It seems easy enough to automate unless you have seen an AP clerk actually perform this task.
Back-office functions cannot be fully automated, because exceptions are common and mistakes made on the front-end would be missed.
There is the rare but perfect scenario where a company makes a purchase and receives the exact items and quantities they purchased with no shortages or damages. But wait, there’s more. When the company receives the invoice, the vendor has billed for exactly what was purchased and received, and even the taxes were calculated correctly.
But let’s get real. Typically, goods are received in partial or multiple shipments and differences in price and quantity are frequent. Not to mention all the rules that apply when calculating tax depending on how a company plans to use a product, who and where they purchase from, where they ship the product, etc. Also, don’t discount the fact that a lot of back-office employees often catch mistakes made on the front-end. Bottom line: if business processes were always executed in their perfect scenario, they probably could be automated easily. But in reality, it would be too complex and therefore pointless to mechanize functions with various exceptions and one-off scenarios.
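A minimal sketch of the three-way match described above shows why the happy path is trivial and the real world is not: the moment quantities or prices disagree, the software has nothing left to do but hand the invoice back to a person. The tolerance, field names, and sample figures are illustrative assumptions.

```python
# Hypothetical three-way match: invoice vs. purchase order vs. goods receipt.
# Anything outside tolerance is routed back to a human, which is where most
# of an AP clerk's real work lives.
def three_way_match(po, receipt, invoice, price_tolerance=0.01):
    exceptions = []
    if receipt["quantity"] != po["quantity"]:
        exceptions.append("partial or over-shipment: quantities differ")
    if invoice["quantity"] != receipt["quantity"]:
        exceptions.append("billed quantity does not match goods received")
    if abs(invoice["unit_price"] - po["unit_price"]) > price_tolerance:
        exceptions.append("invoice price differs from purchase order")
    return "post invoice" if not exceptions else exceptions

po      = {"quantity": 100, "unit_price": 12.50}
receipt = {"quantity": 80}                        # partial shipment
invoice = {"quantity": 100, "unit_price": 12.95}  # billed in full, at a higher price
print(three_way_match(po, receipt, invoice))
# ['partial or over-shipment: quantities differ',
#  'billed quantity does not match goods received',
#  'invoice price differs from purchase order']
```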
As for a company’s business processes, rather than trying to automate them, spend the time to evaluate why each step is necessary. If a process is so repetitive, easy and mindless, why would it take a significant amount of time to complete? Is there a better way? A good rule of thumb is to always evaluate the business process first before adding or implementing any sort of technology. Just because a company could pay for a robot to do a task, why should they if the task is stupid in the first place?
Despite all the articles and rumors flying around that robots are the new humans, have no fear. Even if a robot could do your job, it is unlikely that you would be completely replaced. Fortunately, the prophecy of future humans as shown in Wall-E is not something to lose sleep over just yet.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
Old Industry, New Tricks: AR and VR in Oil & Gas
by Mary Critelli
Technology seems to be moving at such a fast pace it may feel hard to keep up. Some may disagree—maybe they thought by now we would all be living in a world just like the Jetsons, with flying cars and houses equipped with friendly robots. No doubt some millennials in Silicon Valley dressed in flip flops and hoodies are working hard on creating these "Jetson-type" technologies, but there are plenty of technological advances to keep everyone occupied and inspired in the meantime.
As fast as technology moves, it is no secret that the oil and gas industry is typically slow to adopt many new technologies. Making do with what you have and the "if it ain't broke, don't fix it" mentality is part of the industry's culture. Why spend big bucks on today's new technology when tomorrow's is just around the corner? Not to mention newer technology is never tried and true, so companies realize they are taking a risk when implementing.
As it stands today, the oil and gas industry is not booming, and oil prices remain low. There may still be billions in potential revenue, but companies are rightly cautious to avoid major expenses. Yet, a case can be made for shelling out some cash for technology in order to permanently reduce costs in the future. The only question is, in which technologies should oil and gas companies invest?
Two recent technologies to note are augmented reality (AR) and virtual reality (VR). These terms may be familiar to gamers acquainted with the world of video games, but for the rest of the population, here are the definitions:
Augmented reality (AR) – taking a physical object and applying a digital filter to that object (via glasses, a cell phone, an iPad, etc.) in order to enhance the object and/or reveal and track more information about the object.
Example: Snapchat filters recognize your face through the screen on your phone and add features to your face, like dog ears and a tongue, a crown and scepter, etc.
Virtual reality (VR) – a simulated image or setting that an individual can explore and interact with using special equipment, typically some sort of headset.
Example: Many different VR systems exist in the video-gaming world, however, Samsung Gear VR took a different approach. Samsung’s technology and headset can be used with various Android smartphones to generate a mobile VR experience by applying lenses to the top of the smart phone’s screen. Users can then have a VR experience through their smart phone (e.g. swimming with dolphins, walking through a haunted house, or even standing in the middle of a battlefield).
It is easy to imagine the applications of these two technologies in all things entertainment, but the oil and gas industry can also benefit greatly. Companies should take advantage of data-driven tools to maximize production while minimizing downtime without compromising safety standards. These technologies will allow companies to decrease the number of issues with their equipment, increase process efficiencies, simplify their organization, and decrease accidents or incidents on the job. Sound too good to be true? The sections below explain how the improvements can be realized.
Increased Equipment Life and Decreased Maintenance Costs
Purchasing and maintaining or fixing equipment on a drill rig is extremely costly. So much so that when oil prices are down and cash flow is tight, maintaining equipment becomes a low priority or is cut altogether. The logic is to put off maintenance so as not to incur costs until oil prices rise again. But the result? Unsafe equipment and much higher maintenance costs down the road.
Today, visibility into the life and history of a piece of equipment is sitting in a system that may or may not be updated regularly. Even if the data is regularly input, it is likely still inaccurate. However, with augmented reality, finding and tracking data on a piece of equipment becomes easy and accurate. Imagine looking at a wellhead through glasses or a tablet application and instantly having visibility into valve pressure, lubrication, etc. There would also be the ability to determine if any cracks are present due to corrosion. An operator could look at a hydraulic pump and see operating pressure on the tablet’s screen. While standing in front of any piece of equipment on the job site, all relevant information around this equipment can be captured and tracked in real-time. Maintenance costs can be kept low by identifying issues as soon as (or before) they occur, saving time and money.
Increased Process Efficiency
Process efficiencies can increase greatly from these technologies as well. Tasks that typically require a significant amount of time and effort become simpler and faster with the use of AR and VR where it makes sense. Google Glass first came out in 2014, and while not a bestseller for average consumers, it allowed for a significant reduction in production time for Boeing, one of the largest global aircraft manufacturers. Google Glass is a pair of glasses that acts as a type of hands-free smart phone and responds to spoken commands. When building an aircraft, each plane requires an intricate and complicated set of wires and other materials to be manually put together with the strictest precision. Before Google Glass, a technician would print out a diagram of the wires or have a laptop open for reference, looking back and forth between the screen and the task at hand. By implementing Google Glass, technicians could see the diagram through the lens of their glasses and work on wiring and constructing the aircraft while following a live tutorial. Production time was reduced 25% and error rates dropped drastically. And that was three years ago. Today's AR devices offer even more functionality.
Simplified and Safe Organization
A specialized service technician is often needed to fix a piece of equipment. For an offshore rig, flying a specialist out to perform the fix is costly and dangerous. With AR, a specialist does not have to be physically present on the job site to perform the fix. The specialist can gain visibility of the equipment through the same tablet with which the issue was identified. When the on-site operator holds the tablet up to the equipment, the specialist can see what is causing the issues and walk the operator through the steps to resolve them. This minimizes the need for a large number of specialists who must be available to travel wherever a need arises. Instead, all specialists can be located in one central virtual hub and be available any time an issue arises.
In addition, companies will find they can move away from employing teams of specialists and take advantage of a large range of specialists through platforms. Many service and support companies are taking advantage of crowdsourcing, or using the internet to share information and request the help of a specialist or expert. Crowdsourcing is the 21st-century equivalent of asking a mechanic neighbor to help change a car's oil. RigUp is a service-driven crowdsourcing platform that allows operators to sort through a list of specialists and choose the right contractor for a specific job. Instead of hiring a large team and training them with highly specialized skills, companies should consider using platforms to source one or several specialists with a specific skill set on an as-needed basis.
Furthermore, technology can change the game of training employees and reinforcing safety procedures in the oil and gas industry. VR companies can create digital, 3D versions of each offshore rig and simulate many situations an employee might encounter on each of these rigs. Without this technology, it would be difficult to train an employee for emergencies like hurricanes, glaciers, blow-outs, and other dangerous scenarios. On a more basic level, this VR training has proven more effective than traditional classroom training, as workers can get acclimated to their environment without having to be there every day. The more practice and experience a worker can gain in their environment, the less likely they are to have an accident.
Today, BP is utilizing VR software to simulate the exact conditions of a drilling operation—same rocks, temperatures, pressures, and ocean currents—that mimic completed jobs to provide a more accurate and realistic training experience. With this technology, drilling teams practice critical jobs that replicate past scenarios and allow for entire teams to work together in a group environment just as they would on a real job. BP realizes there is no better way to train employees than by letting them experience and handle actual situations they would on the rig. While the implementation of this VR software is recent, BP expects to see a significant drop in accidents and incidents.
In this era of pervasive data, implementing the proper technology to better analyze the data that drives effective decision making is crucial. As with all technology, the hardware and software that drives AR and VR technology development will continue to improve over time. The biggest challenge and opportunity is determining where your company would gain the most value from technology. The opportunities are out there, and there is not a better time to search and implement.
Making Connections: The Internet of Things
by Tanner Button
There is a lot of confusion surrounding the Internet of Things (IoT). IoT sounds like something from a sci-fi movie. However, the world has been consumed by the Internet of Things for quite some time. People carry it around in their pockets, wear it on their wrists, and use it each day to get work done. At its most basic level, the Internet of Things is simply a network of internet-connected objects capable of sending and receiving data. Amazon Echo, FitBit, Nest, smartphones, and laptops are a few easily recognizable examples.
The International Data Corporation estimates the IoT currently has 13 billion connected objects, and that number is projected to surpass 30 billion objects by 2020. This substantial growth suggests the IoT will drive major changes in every industry. Executives must understand why and how to use the IoT in order to maintain a competitive advantage.
Why Use the Internet of Things?
It is difficult to imagine a time when a person might require an internet-enabled toaster. Yet in 1990, a toaster became the “first” IoT device. This toaster was merely an experiment, but it highlights an important concept. Just because something can be connected to the internet does not mean it should be connected to the internet. Companies considering IoT opportunities should think first about the advantages connectivity provides.
There are two main reasons to invest in IoT:
- Monitor remotely
- Collect data in real-time
Smart sensors, the nucleus of IoT, allow users to monitor people, processes, and systems from anywhere in the world. For manufacturers seeking a better understanding of their supply chain, using the IoT makes a lot of sense. Sensors provide more accurate delivery estimates and real-time changes in inventory. This added visibility detects if shipments have been tampered with and mitigates damage risk. End-to-end data can be used to assess weaknesses, identify opportunities, and establish a more efficient supply chain.
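As a rough illustration of what those sensor feeds look like in practice, the sketch below checks a stream of hypothetical shipment readings against simple thresholds and raises alerts that could be monitored from anywhere. The payload fields and limits are invented for the example.

```python
# Hypothetical shipment sensor readings: temperature, door status, and location.
readings = [
    {"shipment": "SH-1042", "temp_c": 4.1, "door_open": False, "lat": 29.76, "lon": -95.37},
    {"shipment": "SH-1042", "temp_c": 9.8, "door_open": True,  "lat": 30.27, "lon": -97.74},
]

def check_reading(reading, max_temp_c=8.0):
    """Flag readings that suggest spoilage or tampering."""
    alerts = []
    if reading["temp_c"] > max_temp_c:
        alerts.append(f"{reading['shipment']}: temperature {reading['temp_c']}°C exceeds limit")
    if reading["door_open"]:
        alerts.append(f"{reading['shipment']}: container door opened in transit")
    return alerts

for r in readings:
    for alert in check_reading(r):
        print(alert)
```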
How to Use the Internet of Things
Data-driven devices give companies insight into processes and operations like never before. IoT allows users to extract enormous data sets and summarize them into actionable analytics. There are four distinct types of data analytics:
- Descriptive analytics – What happened
- Diagnostic analytics – Why it happened
- Predictive analytics – What might happen in the future
- Prescriptive analytics – What to do about what is happening
Companies use IoT data to lower maintenance costs, predict equipment failures, and improve business operations. B2C companies can better understand their target market by analyzing data collected from IoT devices used by their customers.
The Industrial Internet of Things (IIoT) allows manufacturing and energy companies to leverage big data to drive future action and business strategy. The IIoT is essentially the point where traditional information technology (IT) and operational technology (OT) come together. IIoT applications use smart sensors to track inventory (as supply chain managers do) and gather data on condition-based predictive maintenance. IIoT will have a significant effect on how operational excellence is defined and achieved in the next decade.
Implementing the Internet of Things
The Internet of Things will continue to revolutionize the way of doing business across every industry, but the transition will not be easy. Companies that choose to implement the IoT will face many challenges. They will encounter resistance to change from their own organization, vendors, and clients. There will be obstacles to overcome from a security standpoint, including physical security and cyberthreats. The companies will need to be flexible as best practices, standards, and regulations evolve. Organization structures will change, processes will be redesigned, and budgets will be reallocated to support the IoT. While there are advantages to adoption, companies should look to outside resources for assistance in change and implementation management.
Artificial Intelligence
by Peter Purcell
Imagine sitting at the doctor’s office when a robot walks in, asks a series of questions, and then says, “The doctor will be with you in a moment.”
The doctor finally arrives with another robot trailing beside him. The doctor asks questions and performs a few tests, while inputting symptoms and details into his iPad. The robot actively processes all the information and disappears with the doctor to obtain “the results.” Upon return, the doctor provides a detailed packet that lists several health concerns and numerous home remedies.
What happened behind the scenes? What was the robot’s role? While the doctor had formed an initial hypothesis, the robot was able to confirm his thoughts and list all possible treatment options. How did it do that?
The robot was fed thousands of medical textbooks, countless research articles, and millions of web searches. As the doctor was asking questions, the robot was able to process all of this information almost instantaneously to come up with a diagnosis and recommendation equal to or possibly better than what the doctor could have provided.
Welcome to the world of artificial intelligence.
AI uses computers or machines to mimic human reasoning, learning, thought, and behavior. The backbone of AI is the concept of machine learning. Machine learning works by using algorithms (step-by-step mathematical rules) that tell the machine or computer how to behave. These algorithms are built so the machine or computer can understand concepts beyond what was programmed.
In essence, the algorithm sets the initial foundation of knowledge and the machine is able to take that knowledge and build upon it.
For example, an advanced AI machine was recently programmed to understand the basics of human behavior. Initially, it was taught simple things such as if someone leans in to someone else with their eyes closed and lips puckered, it usually indicates a kiss is about to happen. But, after watching hours of popular TV, that same machine now can predict more sophisticated human action and emotion before it occurs, according to an article in Popular Science magazine.
In this light, researchers studying AI often debate the definition and stages of the development of human cognition. At what point does a machine's thinking become human-like? According to an article published by a computer science professor at Michigan State University, there are four sequential levels of AI:
- Type one: Reactive machines – respond to the situation in front of them with no memory of past experience
- Type two: Limited memory – use recent history and observed patterns to inform decisions
- Type three: Theory of mind – understand the thoughts, emotions, and intentions that drive others' behavior
- Type four: Self-awareness – form representations of themselves and their own internal states
Today's AI capabilities put us in type two of the list above: Limited Memory AI.
This type of AI can track patterns in user behavior and use those patterns to predict basic future behavior. An example is smart or self-teaching thermostats. Smart thermostats track their users’ patterns of air use and learn to adjust the temperature depending on time of day and typical usage. Shifting toward type three, we have IBM’s smart computer, Watson. After winning Jeopardy in 2011, Watson moved on to more sophisticated applications, such as diagnosing diseases that were once impossible to understand.
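Staying with the simpler end of that spectrum, a smart thermostat's "limited memory" amounts to keeping a short history of observed behavior and predicting from it. The sketch below averages recent setpoints by hour of day, which is a deliberately simplified stand-in for what commercial devices actually do; the usage history is hypothetical.

```python
from collections import defaultdict

# Hypothetical history of manual adjustments: (hour of day, chosen setpoint in °F).
history = [(6, 70), (6, 71), (7, 72), (18, 68), (18, 67), (22, 65), (22, 66)]

# "Limited memory": remember recent behavior and predict the next setting from it.
by_hour = defaultdict(list)
for hour, setpoint in history:
    by_hour[hour].append(setpoint)

def predict_setpoint(hour, default=70):
    """Predict the preferred temperature for a given hour from past patterns."""
    observed = by_hour.get(hour)
    return round(sum(observed) / len(observed), 1) if observed else default

print(predict_setpoint(6))    # 70.5, learned from the user's mornings
print(predict_setpoint(13))   # 70, falls back to a default with no history
```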
Does this mean medical jobs are at risk? How about for other industries?
Author Dennis Gunton sparked a heated dispute when he stated, “Anyone who can be replaced by a machine deserves to be.” Harsh, right?
Indeed, many critics of AI are such because of the fear robots will take jobs away from human beings. And while that has been the case for industries such as manufacturing and call centers, automation will also create jobs in the long run. Although they are often more efficient, machines still require humans to develop, operate, and maintain them. At least for now, machines cannot accurately, genuinely emulate complex human qualities like empathy, critical thinking, and emotional intelligence. Regardless of anyone’s feelings about AI, one thing is quite clear: technology will continue to learn, adapt, and advance.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
Affective Computing
by Peter Purcell
The idea of technology detecting and responding to human behaviors has fascinated people for decades. Remember when mood rings came out in the 1970s? It was the latest technology—and fashion statement—in identifying people’s emotions. A coworker or a spouse could tell how you were feeling with a glance at your jewelry, and they could respond accordingly. The biofeedback jewelry was extremely popular and marketed as a way of understanding yourself, understanding others, and learning how to gain voluntary control of automatic emotions.
Rapid advancements in technology over the last 10 years have led to the creation of computers which possess the ability to detect and predict. Computers are now able to react to ever-changing environments, interact with animals, and exhibit human-like responses to common events. The primary focus in the affective computing discipline of computer science has been to improve a computer’s ability to learn from data points and make rational decisions in areas such as financial trading or healthcare. While smarter, faster, and more advanced computers continue to be created, we have yet to develop artificial intelligence that can correctly recognize our emotions or feelings.
The likely first response is, "So what? Why do I care if my computer can read my feelings? After all, it's just a machine." Let us first peek into the history of affective computing and what has been accomplished to date. The discipline can be traced to the 1995 study published by Rosalind Picard, whose research centered on measuring emotions. Specifically, could a device be created, beyond the mood ring's novelty crystal, to genuinely measure and track feelings and emotions?
After years of research and several iterations, Professor Picard’s team created a mobile sensor, worn on the wrist, which was able to detect changes in the emotional state of the individual wearing the device. The team had essentially created a mood bracelet. The device tracked when the subject experienced fear, excitement, anger, bliss, loneliness, and other emotions. In addition to providing nice-to-know data points, the ability to measure and track emotions proved to be a major breakthrough. While machines were not able to influence an individual’s emotions, the devices were able to identify and record various emotional states through multiple situations. To answer the “so what?” question above, the data determined patterns and triggers for a variety of emotions, providing test subjects the opportunity for more self-awareness and control in the future.
Additionally, and perhaps more importantly, the new device allowed researchers to observe and track the emotional states of individuals who were unable to communicate them due to various developmental or physical challenges. The discovery immediately had enormous impact on a variety of fields.
Educators, therapists, coaches, and instructors previously left to guess the feelings of their students now had a much clearer understanding of what they faced. As a result of the device’s insight, lesson plans, therapies, and other programs could be customized to meet individual needs, skyrocketing the instructions’ effectiveness. Similar to today’s activity-measuring devices (Apple Watch, Garmin, Fitbit), researchers are creating a new wave of portable devices designed to provide guidance on which activities to pursue or avoid based on a user’s emotions. For example, an individual who is angry and tired may choose to avoid dealing with a difficult coworker altogether, or alter their approach to avoid escalating the situation further. Practitioners working with individuals unable to communicate their feelings and emotions now have the ability to interact based on live feedback and adjust their approach in a real-time manner.
While we have examined the more sociological side of affective computing, there are practical business applications.
Affectiva, the organization co-founded by Professor Picard, is working closely with corporations to create devices which can help them with design, safety, research, marketing, and communication functions. Working with auto manufacturers, the team from Affectiva is developing sensors that detect when a driver becomes sleepy or distracted and take action to avoid dangerous situations. In the field of e-therapy, doctors and therapists will soon have the ability to detect emotions of their remote patients and provide a more accurate evaluation of their physiological and emotional state. Affectiva also conducts studies on the effects of stress, which could benefit the workplace in a variety of ways, from making sure employees are not overworked to calming nervous patients in healthcare settings.
Technological advancements have greatly altered the way we live today. From self-driving cars, computers that converse with humans, smart devices that operate themselves, to machines that replicate bodily functions, humans and machines have become greatly interwoven. Most, if not all, of this interaction is based on logic and the ability to respond to and predict past and future actions.
Many feel the last frontier lies in the computer’s ability to determine and create feelings. While the ability to create and replicate human emotions is still a distant possibility, advancement and the achievements over the past 20 years have shown us the gap continues to narrow. Until that time, the ability to better understand our physical and emotional state will allow humans to recognize and change their actions and behaviors and improve their quality of life.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
Ambient User Experience, Demystified
by Peter Purcell
Ever see the futuristic crime thriller Minority Report, starring Tom Cruise?
In one scene, Tom walks into a Gap store, sporting his newly-transplanted black-market eyeballs. A computer scans his Franken-eyeball and a holographic sales associate merrily chirps, “Hello, Mr. Yakimoto! Welcome back to the Gap. How’d those assorted tank tops work out for you?”
From this quick exchange, the viewer can deduce two things: 1) Tom’s eyeball donor supported the “sun’s out, guns out” philosophy, and 2) this future world has mastered the “Ambient User Experience.”
Ambient User Experience is the idea people should be able to interact with electronic devices with minimal user interface. The devices work together, learning and adapting to the user’s habits, providing assistance in the background.
Almost to the point where the devices’ functionality becomes an unnoticeable, but critical, part of the user’s environment and life—essentially “ambient.”
The trend reasons that by connecting all of a user’s smart devices, and improving the auto-learning/pattern recognition/sensory technology of these devices, users could experience a truly smart world.
The Ambient User Experience can be broken into three unique phases:
Phase 1: You Are Here
Current technology places us comfortably in Phase 1. Individual devices learn from user patterns to simplify the life of the user with minimal user interface. Google Maps is one of the first visible examples of technology learning the patterns of users and utilizing that information to assist in users’ everyday lives.
It is a little unsettling the first time your smartphone independently informs you to leave the house now to accommodate a fifteen-minute slowdown on the commute to work.
In Phase 1, compatible devices can communicate with each other, yet devices created by different manufacturers are often incompatible and therefore cannot talk to one another. Apple devices communicate with other Apple devices, and Amazon devices communicate with other Amazon devices. Across manufacturers, there is a language barrier.
Phase 2: Meet the Jetsons
In Phase 2, all devices owned by a user or family can communicate with one another, regardless of manufacturer, configuration, or function. In this phase, the user can truly live a “smart life.”
The alarm clock monitors breathing patterns and wakes us up at exactly the right time in the circadian rhythm to ensure maximum alertness for the day. As the alarm sounds, it triggers our thermostat to raise the temperature of the room to a comfortable seventy-two; having learned we sleep hot, the thermostat had dropped the temperature to a frosty sixty-five degrees during our resting period. The morning playlist on the home stereo plays automatically while we dress for the day. As we walk out the front door, the interior lights shut off, the thermostat rises to eco-saving mode, the doors lock, and the security alarm activates. The music left playing on the stereo automatically begins to stream through the phone and wireless headphones. As we get in the car, the music switches to the car stereo and the navigation system calculates the fastest route to work given the current traffic situation.
When we pull into the parking garage of the office, the workstation picks up the signal from the phone, and begins its startup process so it is ready and waiting as we walk in the door.
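Mechanically, that smart morning is an event-driven chain: one device publishes an event and the others react to it. The sketch below wires the idea together with a tiny publish-and-subscribe loop; the device names and actions are placeholders, not any vendor's API.

```python
# A toy event bus: devices subscribe to events and react, which is the
# mechanical core of the Phase 2 scenario above.
subscribers = {}

def on(event, action):
    subscribers.setdefault(event, []).append(action)

def publish(event):
    for action in subscribers.get(event, []):
        action()

on("alarm_triggered", lambda: print("Thermostat: raising bedroom to 72°F"))
on("alarm_triggered", lambda: print("Stereo: starting the morning playlist"))
on("front_door_closed", lambda: print("Lights: off, thermostat to eco, doors locked"))
on("front_door_closed", lambda: print("Audio: handing playback off to phone and headphones"))

publish("alarm_triggered")
publish("front_door_closed")
```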
The main barrier preventing us from fully entering Phase 2 is the need for a common platform or software that makes device manufacturer and configuration irrelevant. The Amazon Echo and Google Home are two examples of technology beginning to break down this barrier, putting their owners into a kind of Phase 1.5.
Both devices allow the user to enable “skills” for other devices, such as smart thermostats, smart lights, and security systems. Compatible devices are manufactured by many different companies, yet they have the functionality to communicate with and be controlled through the central “smart home” hub of the Echo or Home.
However, to fully transition into Phase 2, the Echo and Home would need to make device compatibility nearly universal, and develop functionality for connected devices to communicate with one another.
Phase 3: Minority Report World
Sometime in the not so distant future, the Ambient User Experience may move into Phase 3. That is both exciting and terrifying.
In Phase 3, all devices can work for you, regardless of ownership. Let that sink in…
Following the example seen in Minority Report, the Phase 3 Ambient User Experience would use a unique identifier such as an eyeball scan, an implanted microchip, or a wearable device to identify us and adjust to our preferences.
Walk into a hotel room, and the room adjusts temperature and lighting levels to our liking. Borrow a friend’s iPhone, and the phone reflects our contacts, settings and photos instead of theirs. Billboards reflect ads tailored to us when we walk into a store, and digital restaurant menus display options based on our dietary preferences and restrictions.
The most obvious challenge to Phase 3 is developing the technology to create and facilitate this massive network of information. The more concerning and relatable challenge is user willingness to be put “on the grid.”
While the promise of a custom-tailored smart world sounds convenient, there is no denying the creepy Big Brother factor. When every device becomes potentially “your device” or “someone else’s” device, concerns about security and privacy increase exponentially. A hack to a network of this scale could compromise sensitive personal information for everyone on the grid.
And the idea of implanting microchips in people or incorporating retina scanning as a part of everyday life is enough to make even the most pro-technology early adopter say, "ehhh, not me."
The demand for Ambient User Experience has the potential to drive technology development into the next century. Market research has already established the average Millennial and Gen-Z consumer is highly tech-savvy and values convenience and instant information over almost anything else. They are the perfect target market for this experience.
If implemented correctly, with a focus on security and privacy, Ambient User Experience will become a way of life in the next few decades and will change the way we live, work, shop and interact with technology. However, it will be up to the consumer to draw a line to tell technology developers how far is too far, and how much information is too much information.
And maybe we don’t want everyone in The Gap to know about our affinity for sleeveless garments.
This article has been adapted from a chapter from Trenegy’s book, Jar(gone).
The Demise of Traditional Help Desks
by Bill Aimone
Traditional ticketing systems are among the most lamented in large organizations. They’ve grown to be unwieldy burdens that add to the problems they’re designed to solve.
Why?
- Rework – Tiered help desk solutions aren’t designed to solve problems upon first contact.
- Delays – It can take a long time to get connected to the right person who’s qualified to help.
- Errors – Issues are usually routed to the wrong individual due to a lack of communication.
- Complications – Ticketing systems are not user friendly and have overcomplicated business rules.
- Avoidance – People will eventually do all they can to go around the ticket system to get help, including calling outside the company.
- Impersonal Communication – Ticketing systems don’t provide a personal touch and automated response systems are prone to misroute issues.
The traditional solution is far from ideal.
Our Experience
The three founders of EVAN360 have worked for 14 different Fortune 500 companies, government agencies, and large private organizations. The traditional ticketing support has been an issue for employees everywhere.
The founders’ recent consulting work with other large companies exposed more problems than ever with the adoption of new technologies. They visited with several CIOs on this subject. Each had more current open tickets than employees. The backlog is unmanageable.
The complicated mess of issues, business rules, and frustration kills productivity for everyone.
A Better Way
What if there was a way for employees to access the right person to solve issues immediately? Maybe an accountant needs to tweak a report in the ERP before a board meeting. Someone in marketing might need to update their last name in the HR system after getting married. Maybe a field engineer is having trouble entering a customer sales order, or maybe there’s simply no toilet paper in the restroom.
A central hub could be ubiquitous for problem-solving. Imagine employees connecting to each other or outside help in a fast, efficient, simple way. Imagine a knowledge base that solves repeated issues while tracking performance, response time, and quality, ensuring employees are at peak productivity and happy, too.
Well, we stopped imagining and made it happen. The result? EVAN360.
The Answer to Your Ticketing System Woes
EVAN360 is a revolutionary platform that helps organizations solve problems fast. We designed the platform to be an internal support solution for larger companies, allowing them to share a common infrastructure while maintaining a secure environment to fit business needs.
Urgent or pervasive problems are no longer lost in help desk ticketing systems. Employees no longer have to wait on hold or be rerouted from person to person. Companies can use EVAN360 to immediately connect anyone in the company with the appropriate support personnel to solve issues the right way the first time. Support personnel can include internal staff, existing contractors, and EVAN360.
Say goodbye to the dreaded ticketing system experience and unproductive downtime. EVAN360 gets you back to work fast so you can focus on what matters most—growing your business.
Learn more at evan360.com.
Why ESM Is the Key to a Great Employee and Customer Experience
by Bill Aimone
Enterprise service management (ESM) is a revolutionary concept that businesses are only beginning to truly grasp. While there are some alpha adopters, the true nature of ESM is only in its infancy.
According to CIO Magazine, ESM is the next step in the evolution of ITSM. Other analysts are adopting this definition, too. ITSM is built upon a foundation of traditional ticketing and service desk solutions to manage planned and unplanned demand along with managing the systems development lifecycle.
We disagree with this definition of ESM and have adopted a more strategic view of what ESM could and should be.
ESM is a revolutionary concept, not an evolutionary concept, and it doesn't necessarily rely on traditional means of providing service. A true ESM solution is built upon three main tenets:
Enterprise
It is truly enterprise in nature. Enterprise implies that the scope of providing service applies across the entire value chain. Traditional solutions are typically siloed to a specific business function. In the traditional mode, HR has a help line to call, IT has a ticketing solution, and customer service has online chat. None of these solutions interact with one another or fit what the customer or employee needs.
An ESM solution is a single, integrated interface for providing the customer with a way to request service, regardless of who is providing the service. A remote employee requesting help with a medical issue, a corporate employee needing help with a laptop, and an external customer seeking assistance with a product issue all use the same platform to connect to a solution provider.
Service
It is truly service oriented. Service means the solution provides a means of providing the service—not just the service request—via two-way communication. With traditional solutions, communication is largely one-way. For example, traditional ticketing solutions only allow a customer to request service and the call back is done through a second means of communication. AI and chatbots can only provide service for the small fraction of questions written within a narrow set of syntax. Typically, AI provides the wrong answers while chatbots ask the wrong questions, resulting in delayed problem resolution.
An ESM solution provides the means for requesting the service, identifying the right solution provider, and solving the issue. It’s the entire closed loop. The closed loop requires the service request to be a seamless part of the solution without multiple handoffs throughout the organization.
Management
The process is truly managed with business rules and consistent processes. Management means there is some level of built-in control, performance monitoring, and accountability. For example, traditional solutions such as call centers and ticketing systems only provide a way to manage an incoming request. Accountability, performance monitoring, and managing who, when, and how issues are resolved is a black box.
An ESM solution provides a way to monitor a request through the resolution process. This includes understanding service provider response time, cycle time by type of problem, and overall customer satisfaction. The entire process is measured, and information captured is used to understand trends and continually improve customer service.
Finding the Right Technology
Dozens of technologies claim to provide customer support, but only a few have the capabilities of a true ESM solution. While a chatbot might help an employee with Excel, chatbots would be frustrating for a customer seeking immediate assistance with a unique product issue. An interactive voice response (IVR) system might work well for a large group of unsophisticated bank consumers, but IVR is not cost-effective for providing engineering assistance to remote technicians.
Service desks, IVR, and ticketing solutions require investment in human and technical infrastructure and are bottlenecks for problem resolution. Self-service, chatbots, and AI only solve a fraction of customer problems. Phone and email requests are rarely answered as quickly as the customer expects, resulting in long wait times.
An effective ESM solution must be flexible enough to provide the highest level of service at the lowest possible cost.
- Highest level of service = immediate connection to the right person to solve a problem as quickly as possible.
- Lowest possible cost = the minimal infrastructure to provide the highest level of service.
Fortunately, that solution exists. The EVAN360 ESM solution helps companies give their employees and customers the service they deserve.
EVAN360 can be deployed for internal employees and external customers, instantly connecting them to the right expert for help. No ticketing bottlenecks and no jumping through hoops to find answers. It’s truly an unmatched enterprise service solution that can be tailored to a company’s unique needs.
Want to learn more? Visit evan360.com to explore the solution.
This article was originally published by EVAN360. For more insight on digital transformation, technology solutions, IT-related topics, and more, check out EVAN360’s collection of articles here.
Preparing for a Carveout—3 Ways to Get IT Right the First Time for the New Company
by Bill Aimone
As troubles loom in the oil and gas services sector, larger, less nimble conglomerates will be carving out parts of their business to remain competitive. The less nimble operations under the conglomerate must adapt and become more efficient once sold to new investors. Core to operations are company systems and processes. However, the large company’s legacy systems and processes are encumbered with complex configurations, integrations, and customizations requiring significant resources to support the technologies. Once the company is carved out and sold to an investor, complexities must be eliminated to allow the new company to achieve efficiencies.
This means massive technologies must be uprooted and replaced with fit-for-purpose solutions. This sounds easy, but the transition to new, fit-for-purpose systems requires time and energy from the newly formed company. As new investors evaluate target companies to acquire, we recommend the following three keys to ensure success:
1. Develop a process improvement vision for the newly carved-out company.
The process improvement vision will define capabilities the new organization seeks to achieve. For example, the new carved-out leadership team might want to automate the field ticketing process while eliminating the expensive data quality management processes in IT. A well-thought-out process improvement vision will quickly reveal which processes require technologies and which processes do not.
2. Create a data model to map out what specific operational and financial reporting requirements should be put in place.
This requires thinking about what level of detail and rigor is required in calculating profitability. For example, we worked with a well services company that had a complex set of allocations to calculate asset profitability under the larger organization. The newly carved-out company sought to keep profitability calculations simple. This had a direct impact on the new ERP system and allowed the carved-out company to implement a much simpler accounting system. The data model should include key metrics, calculations, granularity requirements, and information needed to make important business decisions. This also includes rationalizing and eliminating unnecessary reports to reduce wasted time.
3. Create the new IT organization from scratch.
Don’t try to force fit or accept legacy IT staff into the carved-out organization. Unfortunately, most IT staff are accustomed to working at a larger-company pace with a narrow set of skills. Your new IT staff will need to be fungible IT professionals. They must be able to quickly pivot between updating databases, providing end user support, and creating new reports. First and foremost, don’t build a full-time or dedicated IT help desk (either outsourced or insourced). There are IT technologies out there that require a zero IT help desk footprint. For example, one of our clients uses EVAN360 to source all IT support directly to the experts in the organization, thus eliminating tiered support and the help desk in its entirety.
Creating a lean IT application, data, and resource footprint for the carved-out company is achievable and necessary in this competitive environment. Trenegy has helped many companies eliminate complexities associated with a carveout and become more efficient along the way. For more information, feel free to email us at info@trenegy.com.
3 Ways to Reduce Software Entitlement Costs
by Peter Purcell
IT modernization efforts continue to complicate managing software licenses or subscriptions (entitlements). Determining when entitlements are going to expire and need renewing is costing organizations time and money. It is difficult to predict the actual use of software by each employee or whether employees intend to utilize individual software licenses in the future. Software vendors often provide multiple entitlement models yet do very little to help IT determine which is the most cost effective. The result is overpaying software vendors for entitlements. After conducting audits, we found many organizations are overspending by up to 30% on vendor software licenses.
Many companies have implemented a Software Asset Management (SAM) tool like Flexera, Snow, or ServiceNow to help manage the complexities of their entitlements. Most SAM implementations focus on tracking basic entitlement assignment, cost, and compliance, but there’s additional functionality that’s often overlooked. Initial implementations offer peace of mind for passing various audits, but when it comes time to use the SAM tool, you might be missing some key capabilities. Here are three ways to take full advantage of your SAM tool’s functionality:
- Use the App Store functionality to process all software license requests
- Obtain and implement the SaaS management module
- Fully configure entitlement portfolio management
1. Use the App Store functionality to process all software license requests
SAM tools can provide an internal app store: a centralized hub where employees acquire the software they need to perform day-to-day tasks (think the Apple App Store or Google Play). This makes it easy for employees to request entitlements and for IT to manage deployments. Processes and workflows can be established to allow employees to download or subscribe to approved software, as sketched below. To prevent employees from working around the app store, IT also needs processes to quickly assess and address requests for new or non-standard software.
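To make the idea concrete, here is a minimal, hypothetical sketch of the routing logic behind an internal app store request: standard titles are approved and deployed automatically, while anything else goes to IT for review. The catalog contents and field names are assumptions for illustration, not any particular SAM tool's API.

```python
# Hypothetical sketch of an app store request workflow, assuming a simple
# approved-software catalog. Names and fields are illustrative, not tied
# to any specific SAM tool.
from dataclasses import dataclass

APPROVED_CATALOG = {"Microsoft Project", "Slack", "Zoom"}  # assumed standard titles

@dataclass
class SoftwareRequest:
    employee: str
    software: str

def route_request(request: SoftwareRequest) -> str:
    """Auto-approve catalog software; route everything else to IT review."""
    if request.software in APPROVED_CATALOG:
        return f"Auto-approved: deploy {request.software} to {request.employee}"
    # Non-standard software goes to IT for a quick assessment against a target SLA
    return f"Routed to IT review: {request.software} for {request.employee}"

print(route_request(SoftwareRequest("jdoe", "Slack")))
print(route_request(SoftwareRequest("jdoe", "SomeNicheTool")))
```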
Employees or functions that bypass the app store need to be held accountable, because accurately tracking the software used within the company is key to achieving the savings described in #2 and #3 below.
2. Obtain and implement the SaaS management module
SaaS software subscriptions enable companies to obtain tools and functionality easily, quickly, and cost effectively. So easily, in fact, that business functions will often bypass IT to get their own versions of instant messaging software like Slack, project management tools like Asana, and video conferencing tools like Zoom. The $5/month per user that’s automatically charged to a company credit card doesn’t feel like a lot of money, but costs can add up quickly. Worse, people move or separate from the company without turning off these accounts. A significant amount of money can go toward unused subscriptions each month. Unfortunately, these cannot be accurately tracked or managed by IT.
SAM tools provide the ability to manage SaaS subscriptions and work well when all requests go through the app store. The SaaS subscription module will track accounts, usage, and pricing models. These modules will notify IT when an account has not been used for a certain period of time or if price breaks are owed to the company. Unused subscriptions can then be canceled or relisted on the app store.
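As an illustration, here is a minimal sketch of the kind of inactivity check a SaaS management module performs. The 90-day threshold and the subscription data are assumptions, not a specific vendor's defaults.

```python
# A minimal sketch of flagging unused SaaS subscriptions based on last-used dates.
from datetime import date, timedelta

INACTIVITY_THRESHOLD = timedelta(days=90)  # assumed threshold

subscriptions = [
    {"user": "jdoe", "app": "Asana", "monthly_cost": 10.99, "last_used": date(2024, 1, 5)},
    {"user": "asmith", "app": "Zoom", "monthly_cost": 14.99, "last_used": date(2024, 6, 20)},
]

def flag_unused(subs, today=None):
    """Return subscriptions idle longer than the threshold, plus the monthly waste."""
    today = today or date.today()
    idle = [s for s in subs if today - s["last_used"] > INACTIVITY_THRESHOLD]
    waste = sum(s["monthly_cost"] for s in idle)
    return idle, waste

idle, waste = flag_unused(subscriptions, today=date(2024, 7, 1))
for s in idle:
    print(f"Cancel or relist: {s['app']} assigned to {s['user']}")
print(f"Potential monthly savings: ${waste:.2f}")
```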
3. Fully configure entitlement portfolio management
Entitlement portfolio management is more than accurately tracking cost and compliance of licenses and subscriptions. Companies can use these modules to track software usage so licenses can be reassigned to people who actually need the tool. Everyone wants Microsoft Project because it is a premier project management tool, but not everyone uses it. Tracking usage and reassigning licenses can save a significant amount of money.
Flexera, Snow, ServiceNow, and others provide companies with the ability to automatically take advantage of newly released pricing models for existing entitlements. These packages connect with third-party sites (often crowdsourced) populated with the latest pricing information. This information is compared with what is currently being paid, and alerts and reports are generated to help IT and supply chain take advantage of the cost difference.
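The comparison logic is simple in concept. Below is a rough sketch, with made-up pricing data, of flagging entitlements where a lower published price suggests a renegotiation opportunity.

```python
# A rough sketch of comparing the price currently paid per seat against a
# newer published price, and alerting when renegotiation could save money.
# All pricing data here is made up for illustration.
current_entitlements = {
    "Visio Plan 2": {"seats": 120, "paid_per_seat": 15.00},
    "Acrobat Pro":  {"seats": 300, "paid_per_seat": 23.00},
}

# Imagine this is populated from a third-party (often crowdsourced) price feed
latest_pricing = {"Visio Plan 2": 12.50, "Acrobat Pro": 23.00}

for product, info in current_entitlements.items():
    latest = latest_pricing.get(product)
    if latest is not None and latest < info["paid_per_seat"]:
        monthly_savings = (info["paid_per_seat"] - latest) * info["seats"]
        print(f"Alert: {product} has a lower published price; "
              f"potential savings of ${monthly_savings:.2f}/month")
```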
While many companies are overpaying for tools, simple changes to the SAM tool and processes can result in significant savings. At Trenegy, we have been helping global companies save significant amounts of money by better managing software entitlements. To learn more, reach out to Peter anytime at ppurcell@trenegy.com.
What IT Transformation Really Means—The 4 Elements
by Peter Purcell
A few years ago, we published a book called Jar(gone) to translate overused and often meaningless buzzwords used by consultants. Our purpose was to present tongue-in-cheek explanations of what the buzzwords really mean and, more importantly, what they don’t. Spoiler alert—buzzwords are the bane of consultants’ existence and are typically created as an excuse to try and squeeze more money out of clients.
One chapter we never published in our book was on IT transformation. The term sounds good, but if you ask ten IT executives what IT transformation means, you’ll get ten different answers. If you ask business executives what it means, you’ll receive the same answer: money for expensive IT stuff that doesn’t help me run the business. Well, maybe not exactly those words, but something very close.
Given the scenario and conflicting expectations, how does a company actually transform IT? Is it converting an IT department from a centralized to a de-centralized model? Is it changing the IT business model so it becomes a value creator? (Sorry, threw another buzzword in there.) Is it creating an IT model that drives business growth and improvement?
We suggest it’s something a bit simpler. Transforming IT involves making changes to IT processes, organizational structures, and tools to ensure predictable, reliable, secure, and cost-effective services. Do that, and the business can operate efficiently and grow as planned. And the CIO gets to sleep at night, enjoy life, be better aligned with the business, and keep their job.
4 Elements of IT Transformation
1. Predictability
What it means: behaving in a way that is consistent and expected
Providing predictable IT services is typically the result of good processes around interactions with internal and external customers. One of the first and most obvious steps is developing clear communications and adhering to response-time service level agreements (SLAs). Customers need to be confident that IT will respond to service requests in a consistent, expected manner. Hint: it’s easy to create SLAs with a lot of padding so IT can be predictably slow. Instead, make the SLAs aggressive. Customers are impatient and shouldn’t be kept waiting for solutions.
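As a simple illustration, the sketch below measures response-time SLA compliance from ticket data. The priority tiers, targets, and tickets are assumptions, not a prescribed standard.

```python
# A simple, assumed example of measuring response-time SLA compliance.
SLA_TARGETS_HOURS = {"P1": 1, "P2": 4, "P3": 24}  # illustrative targets

tickets = [
    {"id": 101, "priority": "P1", "response_hours": 0.5},
    {"id": 102, "priority": "P2", "response_hours": 6.0},
    {"id": 103, "priority": "P3", "response_hours": 20.0},
]

met = [t for t in tickets if t["response_hours"] <= SLA_TARGETS_HOURS[t["priority"]]]
print(f"SLA compliance: {len(met)}/{len(tickets)} tickets "
      f"({100 * len(met) / len(tickets):.0f}%)")
for t in tickets:
    if t not in met:
        print(f"Missed SLA: ticket {t['id']} ({t['priority']}) "
              f"responded in {t['response_hours']}h")
```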
A second and equally important step is to schedule system maintenance on a regular basis. Many IT shops will patch systems as vulnerabilities are identified, often causing system reboots when business least expects it. Instead, cluster this type of maintenance monthly or bi-monthly on the same weekend. This way, business knows when systems may not be available to support day-to-day operations and can plan accordingly.
Finally, there’s no getting around emergency maintenance that can take systems offline. In this case, clearly communicate to the business why the system needs to be taken down, when it will be taken down, and when the system will be available again. Don’t take the system down without notifying the business.
2. Reliability
What it means: dependability
Is there a difference between predictability and reliability? Yes and no. The bottom line is that IT can provide predictably bad service and customers can rely on systems to be down more than up (a scenario most IT departments want to avoid). However, providing an environment customers can depend on isn’t that difficult.
Reliable IT starts with having a clear picture of the IT environment supporting the business. Many businesses can have thousands of computing platforms running a larger number of applications. Creating a strong Configuration Management Database (CMDB), which contains a comprehensive inventory of systems and applications, enables IT to know what’s owned and what needs to be maintained. Surprises are minimized and IT can properly keep critical systems running.
End users want to know their tools will be available when needed. Implementing a robust monitoring system helps IT spot and address issues before customers are affected. Extra storage, CPU capacity, or memory can be allocated long before system performance degrades and end users’ ability to work is affected.
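A monitoring check of this kind can be as simple as comparing current utilization against warning thresholds. The sketch below is illustrative; the thresholds and server metrics are assumptions.

```python
# A bare-bones sketch of proactive capacity monitoring: compare current
# utilization against warning thresholds so capacity can be added before
# users feel it. Thresholds and metrics are assumed for illustration.
THRESHOLDS = {"cpu_pct": 80, "memory_pct": 85, "storage_pct": 90}

servers = {
    "app-server-01": {"cpu_pct": 72, "memory_pct": 91, "storage_pct": 60},
    "db-server-01":  {"cpu_pct": 45, "memory_pct": 70, "storage_pct": 93},
}

for server, metrics in servers.items():
    for metric, value in metrics.items():
        if value >= THRESHOLDS[metric]:
            print(f"Warning: {server} {metric} at {value}% "
                  f"(threshold {THRESHOLDS[metric]}%); plan capacity now")
```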
3. Security
What it means: trustworthiness
Much can be written about the importance of securing IT environments from nefarious parties. IT and the business need a two-pronged approach to providing a secure environment. The first prong is technical: turning on two-factor authentication, providing strong firewalls, implementing zero-trust authentication, identifying and patching security vulnerabilities, monitoring external email, hardening customer-facing websites… the list goes on. These are table-stakes actions that every shop should consider.
The harder component of IT security lies in the hands of the business. It’s crucial to train end users to use the “software between their ears” and not click links from unknown sources, visit websites that could infect the environment, or click on infected cat videos. It requires a partnership with the business to hold team members accountable when they violate these rules. Setting up a governance model so the business takes ownership of this risk is key to solving the problem.
4. Cost-effectiveness
What it means: producing good results without spending a lot of money
Most business executives feel that funding IT is no different than the government funding black budgets. You spend a lot of money and trust that it’s used wisely to benefit the company, but you don’t know a thing about how it’s spent.
This can be easily solved by establishing a governance model where the business stays involved in determining how IT spends money and supports the company. Not only is this a best practice, it’s also aligned with COBIT 2019. Big hint here: when hosting governance meetings, IT should not drag the business through a discussion of how drivers are being updated on Windows laptops so Bluetooth devices can connect more easily. Keep the discussions at a strategic level. Use the business to help prioritize initiatives and address issues and roadblocks.
Trenegy has been helping companies transform IT into high performing organizations for the past 12 years. Contact us to learn more about how business and IT can work together to make sure IT is predictable, reliable, secure, and cost-effective. Our team is made up of some of the best consultants you’ll find.
3 Things IT Needs When Implementing a Patch Management Program
by Lauren Conces
Patch management is a fundamental part of IT security. Regularly patching your infrastructure and applications and keeping systems up to date is vital to vulnerability remediation.
A lack of regulated, cyclical patching events increases the risk of hackers exploiting vulnerabilities and compromising systems. Without proper tracking of patching efficacy through regular vulnerability scans and detailed reporting, it’s almost impossible to evaluate security levels. With unstructured patching practices, the business is also more likely to receive unexpected or last-minute notices of patching-related downtime, which doesn’t help IT’s reputation.
The patching process tends to be unstructured in many companies, including large corporations with infrastructure hosting highly-critical systems. As cyberattacks increase, it’s even more crucial to have strong and proactive patch and vulnerability management programs with accurate reporting.
If IT coordinates the patching process well, communicates internally and with business customers, and performs activities in a controlled manner, it’s possible to successfully stack patching and reboot activities across multiple support teams into a consolidated time period—even a single weekend.
These are the three things to prioritize:
1. Planning
Hosting in-depth facilitated workshops with representation from each application and device support team is a great way to begin development of a complex patching process. General best practice is to standardize patching activities into a monthly schedule and stack as many reboots as possible to reduce application downtime. Create a detailed implementation plan with tasks, points of contact, support teams, and estimated timing.
Patching schedule considerations:
- Pre-loading and pre-testing of patches
- Patching of independent or redundant systems
- Required shutdowns and subsequent startups
- Patch application and reboots
- Post-testing and validations
- Patching critical vulnerabilities separately for expedited remediation
When workshopping the sequencing of tasks, the primary focus should be on identifying dependencies between activities. For example, server X may need to be down before application Y can be patched, or certain batch jobs may need to complete before reboots can occur. Take advantage of any HA (high availability) capabilities that allow patching to occur without end user impact. The more investment in HA and redundancy, the easier it will be to consolidate patching activities into a single time frame.
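Once dependencies are captured, the patch-weekend sequence largely falls out of them. Here is a minimal sketch using a topological sort over hypothetical tasks; the task names and dependencies are purely illustrative.

```python
# A minimal sketch of sequencing patch-weekend tasks from their dependencies
# using a topological sort. Tasks and dependencies are hypothetical.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it starts
dependencies = {
    "stop batch jobs":       set(),
    "shut down app Y":       {"stop batch jobs"},
    "patch server X":        {"shut down app Y"},
    "reboot server X":       {"patch server X"},
    "start app Y":           {"reboot server X"},
    "post-patch validation": {"start app Y"},
}

# Print the tasks in an order that respects every dependency
for task in TopologicalSorter(dependencies).static_order():
    print(task)
```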
When scheduling patching events into a standardized time frame (e.g., monthly or bi-monthly), make sure to plan around key events such as month-end close or year-end close.
2. Communication
Complex patching schedules require several streams of communication. Newly implemented patching cycles require an extra level of care in distributing messages across the organization.
You can split communications for each cycle into three simple categories: what’s about to happen, what is happening, and what has happened.
What’s about to happen: Not only should IT be fully aware of scheduled patching activities, but the business should be alerted to applications that may be impacted and given a way to defer non-critical patches to avoid downtime. Top-down communication across the enterprise should emphasize the importance of patching systems, encourage support of IT’s patching program, and discourage deferrals. End users should also be informed of the impact on key processes (e.g., payment systems).
What is happening: Throughout patching events, regular updates on activity completion should be sent from a single source (one person or team) for consistent messaging. It’s important to note in these communications how far ahead or delayed activities are so teams can prepare to start at a different time than scheduled.
What has happened: After the implementation of a new patching process (hypercare period), ensure lessons learned or changes in the process are communicated. As a team, evaluate successes and failures to drive improvement across your environment’s architecture and design.
3. Execution
If you want to stack patching into a tight timeframe such as a singular weekend, there are a few things to plan for:
Coordination: There are a couple of different ways to coordinate patching events. If IT has a good change management system and typically tracks projects through change requests, the event could be coordinated by change tasks. This method is recommended only for developed processes where everyone is fully aware of their responsibilities. For newly implemented processes, it’s beneficial to have a singular coordinator (project manager) responsible for tracking progress, providing updates, paging out teams to begin tasks, leading a bridge call for issue resolution, and escalating issues to management.
Escalation: Strong patch management programs require a core team of individuals who are knowledgeable about the process and inner workings of the IT environment. During hypercare, at least one person from this group should be available to troubleshoot issues and provide guidance when systems are unexpectedly impacted. Support team leadership should also be available for escalation in case the primary POC is non-responsive.
Tracking: If you’re running a security-based patching process, tracking vulnerabilities from identification to resolution will help IT determine the efficacy of deployed patches. The vulnerability management process should feed into patching, but don’t underestimate the importance of continuing to track the targeted vulnerabilities after patch deployment.
Three Pillars of Software Change Management
by Lauren Conces
“The only constant is change.”
This old adage applies to many facets of business—especially IT. There will always be a growing need for reliable and secure technology.
What’s important is managing these changes from conception through implementation to achieve your organization’s end goals. Your company can deploy state-of-the-art systems with ample functionality, but without strong change management backing the rollout, implementation will quickly deteriorate into more of a problem than a solution.
Change management is not only required for cross-functional, procedural changes, but also for any newly implemented software or functionality, system upgrade, or smaller system configuration. Following are the three pillars of software change management you need to know.
1. Impact Analysis
Most infrastructure-related changes require some form of impact analysis. Suppose IT decides to change the time at which a system is shut down for maintenance. It will be critical to identify and communicate potential impacts to downstream applications and data flow during this time. If a change brings down a critical system or application, it will hinder the end-user’s ability to carry out essential operations.
Performing impact analysis is easier if IT’s system architecture is already mapped with connections between infrastructure and applications. If your company uses an ESM platform, like ServiceNow or Jira, take advantage of available service mapping capabilities and prioritize mapping out critical services.
Evaluate the application or device impacted by the change to see what items are affected and consult the associated IT Service Delivery Manager or IT Application Owner on the change before implementation. In general, if the change could affect end users, respective business units must be made aware and provide approval.
2. Testing
Once the change is agreed upon and has passed through the development stage, it must then go through UAT (User Acceptance Testing) prior to deployment in the production environment. UAT allows the business to test the newly implemented software, functionality, upgrade, or configuration.
Unfortunately, UAT tends to be overlooked due to budgetary and resource constraints, but it should be regularly enforced by change management to mitigate risk of issues after go-live. In production, issues are much more difficult and costly to remediate. UAT will ensure alignment between IT’s end goals and functionality provided by system developers.
3. Communication
Although communication may not technically fall under change management, it is the most important pillar. Communication plans should be made prior to development and should detail which communications will be sent, who owns them, and when they go out.
Consider the following ways of providing thorough and effective internal and external communication when planning for a change:
Internal
Early Involvement: Prepping internal resources for the change in the early stages of development is key to user acceptance. Even if the messaging is vague to leave room for changes in development, end goals and primary benefits should be communicated throughout IT and the rest of the enterprise to rally support around the change and upcoming trainings.
Customized Training: Training internal resources is essential, particularly if the newly implemented functionality results in a process change. Adjust your training and method of delivery based on the scope of the change. Consider creating required training modules for new hires.
Support and Accessibility: Evaluate how open the line of communication is between change makers and the change recipients. If recipients have questions or feedback, make sure they know who to contact and how. Large organizations may leverage help desks or other support software. We recommend EVAN360 for the most immediate, personalized support.
External (end-user)
Concise Messaging: With end-user messaging, err on the side of conciseness. Remember to “feed the goldfish,” or in other words, operate as if your end-user has the attention span of a goldfish. Feed short and succinct messages on a regular basis. This allows for better retention.
Feedback Channels: Capturing end-user feedback helps in refining recent changes in processes and system configurations. It also helps determine the effectiveness of change management (i.e., did customers have issues accessing promised functionality or understanding why the change was made?).
CMDB Basics: Properly Commissioning, Maintaining, & Decommissioning Assets
by Peter Purcell
Tracking IT assets is critical to ensure a company is getting the right value and return on investment out of IT. It seems like a tedious process to update critical asset information every time a laptop is updated or a new server is commissioned. And it is! However, not tracking assets can cause major problems down the line. Trenegy has seen organizations lose millions of dollars per year paying for licenses, subscriptions, or maintenance on assets that are no longer supported or in use.
A company needs to understand and know what is owned to ensure assets are managed correctly, the right amount is paid for licensing fees, and service levels are met. Asset tracking is typically done using a CMDB (configuration management database), which is a database of a company’s assets (laptops, software, network devices, etc.).
Setting up an asset in the CMDB correctly at commissioning (when an asset is added to the portfolio) is critical. This way the asset can be tracked throughout its lifecycle, enabling IT to know when to update or retire hardware and systems. More importantly, a well-designed CMDB can track incidents and provide a clear view into trends. This information can be used to enable preventative maintenance, increasing reliability and predictability.
Implementing a CMDB is not hard. Maximizing the value of the CMDB can be achieved by starting with a good data model and then implementing strong commissioning, maintenance, and decommissioning processes.
Start with a Data Model
Create a data model to determine what kind of information IT needs and who needs it. There’s essential information that should be maintained, including end of support, end of life, criticality of the asset, etc. Specific teams within IT will need differing information. For example, the Enterprise Architecture Team might need data the Operations Team doesn’t and vice versa. Make sure this information will be captured.
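As an example of what a data model might capture, the sketch below shows a handful of illustrative fields for a single configuration item. The field names are assumptions, not a specific CMDB product's schema.

```python
# A hedged sketch of the kind of fields a CMDB data model might capture for
# each configuration item. Field names are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ConfigurationItem:
    name: str
    asset_type: str        # e.g., server, laptop, application
    owner_team: str        # who maintains it
    criticality: str       # agreed nomenclature, e.g., "Tier 1"
    end_of_support: date   # drives patching and replacement planning
    end_of_life: date      # drives decommissioning
    licensed_until: date   # drives renewal and entitlement decisions

erp_db_server = ConfigurationItem(
    name="erp-db-01",
    asset_type="server",
    owner_team="Operations",
    criticality="Tier 1",
    end_of_support=date(2026, 10, 14),
    end_of_life=date(2028, 10, 14),
    licensed_until=date(2025, 6, 30),
)
print(erp_db_server)
```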
Next, it’s important to develop a standard nomenclature to avoid confusion. For example, “criticality” might sound straightforward, but it could mean different things to different people.
Having this information readily available will drive policies and procedures around commissioning a new asset. It also allows for insight into incidents, such as how often a server goes down, so you know when it needs attention.
Properly Commission Assets
Bringing in new assets requires strong policies and procedures to ensure they’re secure, needed, and within budget.
Commissioning an asset needs to be considered as part of any project that introduces new hardware or software into the organization. Project teams need to be held accountable to ensure that the asset and associated information is entered into the CMDB before it is added to the environment.
As part of the commissioning process, we recommend setting a hard and fast rule: an asset should not be up and running unless the CMDB has been properly updated with complete and accurate information.
Maintaining Assets
Maintaining assets is necessary to ensure reliability and predictability. Maintenance policies and procedures should be established to ensure that any change made to an asset is properly reflected in the CMDB, even when maintaining assets “under fire” or during emergencies. Every time an asset is touched, the information in the CMDB needs to be reviewed and updated—even for something as simple as a Windows update on a particular server.
This helps ensure companies don’t waste money by retiring or replacing an asset too soon. Better yet, it enables an IT department to respond quickly to cyberthreats. Knowing exactly what is owned and how assets interrelate makes responding to issues and threats much quicker and less expensive. Chasing down assets because the CMDB is not up to date is a waste of time and money.
Decommissioning Assets
It is critical to update the appropriate information in the CMDB when an asset is retired or no longer needed. This ensures time isn’t wasted looking for an asset no longer in the portfolio. Or worse, paying for licensing or support on an asset or subscription that is no longer in use.
At Trenegy, we’ve helped organizations save significant amounts of money by properly tracking, maintaining, and decommissioning assets so they understand exactly what they own. To talk to our team about improving your IT organization’s ROI, email us at info@trenegy.com.
ChatGPT: Common Questions & Potential Impact
by Todd Boutte
No consumer application has grown quite as rapidly as ChatGPT. If you aren’t familiar with ChatGPT, it’s an artificial intelligence chatbot that’s far more advanced than the average customer service chatbot. It was created by OpenAI using natural language processing to imitate a human as best as possible. The outcome is pretty impressive.
The AI chatbot reached 100 million monthly users within two months of its launch in November 2022. Even the largest and most profitable apps (Google, Facebook, YouTube) weren’t adopted at such a rapid rate.
It interacts in a conversational, humanlike way and has the ability to write and debug code, solve math equations, write full essays and articles, answer questions and follow-up questions, give instructions, and much more.
Common Questions About ChatGPT
The app has garnered some strong reactions—excitement, fear, skepticism. Will ChatGPT replace Google? Will ChatGPT replace employees? Can you trick ChatGPT into learning incorrect information? Some aspects of the app remain a little mysterious. Below, we address a few common questions people are asking.
How does it work?
While the exact ins and outs aren’t published, we asked ChatGPT itself where it gets its information. Here’s what it said: “ChatGPT has been trained on a diverse range of internet text to generate human-like responses to questions and prompts. This includes a wide variety of topics, such as news articles, scientific papers, historical documents, and fiction, among others. The model’s training data is sourced from the web, and its training process uses deep learning techniques to learn patterns in the text and generate responses based on that knowledge.”
Its answer is a bit vague, but we know it’s doing more than just pulling responses from Google. It’s trained on a variety of sources, and it’s also continually learning from interactions with users.
Will ChatGPT replace Google search?
The app is currently intended to interact with people and learn, not serve as a search platform, although it does have similar capabilities. We don’t suspect it will completely replace Google search, at least not any time soon. However, Google will inevitably lose some traffic to ChatGPT as people figure out what it can do.
Think of it this way: Suppose you ask both Google and ChatGPT, “What is lease accounting?” Google will give you a list of sources on where to find that information. You’ll click the source that seems reputable and offers a digestible explanation. ChatGPT will give you one understandable explanation and answer follow-up questions. What ChatGPT doesn’t currently do is provide the most up-to-date information or offer insight from multiple sources, so it’s not a true replacement for Google.
In fact, the real competition against ChatGPT isn’t Google. It seems to be specialty sites and forums, some of which prohibit the use of ChatGPT altogether. These are places where people can get the back-and-forth interaction needed to solve problems and have actual conversations with people who know what they’re talking about.
Can you trick ChatGPT into learning incorrect information?
There’s not a solid answer for this, but if we had to guess, there’s probably not enough momentum to steer it. It would likely require millions of interactions. It’s not like trying to influence one person; it’s more like trying to influence hundreds of thousands of people at a time. ChatGPT gets its information from a large variety of sources, so completely misdirecting it would be difficult.
Will ChatGPT replace employees?
Most jobs require some level of human intervention, so it’s not likely to replace jobs. After all, ChatGPT has to have an input.
It does supplement and make some jobs easier. Take microblogging, for instance. Marketers and writers are already using ChatGPT to write or draft blogs and articles that require minimal edits. The job still requires a human to maintain and distribute articles.
ChatGPT also has the potential to make programming and developer jobs easier by writing complete source code. What it can’t do is peer review code. Developers can use the app to augment their work and remove some of the frustration and repetition. But anything ChatGPT produces still requires validation.
In short, it won’t entirely replace employees. It will just help them be more efficient and solve problems quickly.
The Future of ChatGPT
There’s a lot of room to grow with ChatGPT. Right now, we would classify it as a supplement—not a replacement—for your job or organization. Like any new tool, there are lots of possibilities and pitfalls, but it will likely become part of every organization’s technology arsenal in the next couple of decades.
For an even deeper dive into ChatGPT, listen to our podcast episode, ChatGPT: Implications for Business & Beyond, featuring Trenegy’s Technology Lead, Todd Boutte.
Considerations Before Using ChatGPT for Business
by Todd Boutte
With the rise of ChatGPT, we’ve heard a lot of talk around its potential in the business world. People are wondering how to use ChatGPT for business. We expect the app to have an influence across organizations as it continues to grow. Whether it’s ChatGPT specifically or another AI of ChatGPT’s caliber, companies should consider what changes might arise in the years to come. It won’t completely replace human jobs anytime soon, but it certainly has the potential to make organizations more efficient and effective.
For organizations that might be taking advantage of this technology in years to come, here are a few key considerations:
Identify Use Cases
Before any new technology implementation, it’s important to know exactly how it will be used and by whom. One major area in which we see ChatGPT (or a ChatGPT-like technology) functioning is knowledge management. Employees almost always rely on tracking down a human, a manual, or a vendor when searching for information. What if the organization’s knowledge base were more portable and accessible? A virtual assistant with the conversational ability and accuracy of ChatGPT could make knowledge management significantly more efficient. It would be like talking to a person who knows the ins and outs of every legal document, land record, contract, and employee handbook, and who can answer follow-up questions, make connections for you, and spot trends. But instead of a person, AI of this caliber has infinite bandwidth.
Find an Implementation Partner
It’s important to team up with an implementation partner when implementing any new AI solution. Many companies have been in the AI space for a long time and know the ins and outs of a tool like ChatGPT. They are going to have the most knowledge when it comes to bringing such a powerful product into an organization. They will know how to set boundaries, prioritize security, and create buy-in.
Most importantly, an implementation partner will help the tool pull information from the correct data by plugging in policies, procedures, and the entire framework from which the AI learns. Essentially, they will set the AI up to provide correct and complete information, including HR documents, instructions from vendors, legal documents, employee guidelines, troubleshooting tips, contracts, records, safety policies, and more.
Allganize is a natural language search solution company that has launched this type of solution. Check it out here.
Set up a Governance Model
Once ChatGPT or another similar tool is in place, it will require governance. It’s not simply a tool you set and forget. Organizations will have to treat it as both technology and “employee.” It’s almost like a new business analyst that requires more attention and training up front and eventually learns the ropes. As time goes on, organizations will still have to examine it for accuracy and completeness. But instead of requiring programming to improve its behavior, it relies on feedback.
The Key Takeaway
ChatGPT is a powerful tool that has the potential to save a lot of money and time. But it’s important to determine how you’ll use ChatGPT for business, partner with the right people, and ensure it’s managed properly. Right now, it seems most organizations are still in the consideration stage as they learn how it works and start to evaluate where it might best serve their business. Evaluating use cases is key to ensuring this type of tool will create value, efficiency, and a worthwhile return on investment for your organization.
The Future of Business Intelligence
by Todd Boutte
Business intelligence (BI) is a simple concept. It involves 1) collecting data pertaining to your company from internal and external sources and 2) finding a way to distill it into something actionable. Essentially it involves harvesting the data you need to make good business decisions.
Today, the term “business intelligence” usually refers to the software or tools organizations use to turn data into usable information. It’s come a long way in the last 10 years, and with the recent growth of artificial intelligence, BI tools have powerful potential.
How Business Intelligence Has Changed in the Last 10 Years
In the last decade, companies have consolidated their business systems around a few key players (Microsoft, Oracle, SAP). Microsoft stands out because it has built its products into an ecosystem that thousands of organizations use every day, and it developed Power BI as a stand-alone product within the last 10 years.
More than 10 years ago, BI tools were difficult to use. They required people with specialized skill sets to write code and gather data before it could be turned into usable information. Microsoft, as a leading data and office productivity company, has made key contributions to simplifying business intelligence. With Power BI, companies no longer need an army of database administrators and developers to handle data. Business intelligence has become more of an intuitive, self-service platform people can use themselves.
The Future of Business Intelligence Is Artificial Intelligence
Since Microsoft is a key player in this industry, they’ve already included some basic AI tools within Power BI. One of those is a Q&A box that can be included in a report to make it easier to find information. A user can ask, “What was the revenue in Q1 of 2022 vs. 2023?” Power BI will do its best to pull that information.
As Microsoft continues to develop AI capabilities, we expect users will be able to ask even more complex questions and follow-up questions, just like with ChatGPT.
We also expect AI to be able to examine a set of data and make inferences based on that data. For example, suppose a company needs to review customer feedback on a product line but has 10,000 customer reviews. That’s a lot for one person to parse through. Instead, there’s potential for AI to step in and find common threads in the language without spending hundreds of human hours looking through reviews. Instead of merely taking a 5-star review at face value, AI could analyze what was actually said about the product, because a 5-star review isn’t always meaningful if the text says otherwise.
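As a toy illustration of surfacing common threads, the sketch below counts frequent terms across a few made-up reviews. An AI-assisted BI tool would do something far more sophisticated, but the idea of automatically surfacing recurring themes is the same.

```python
# A toy sketch of surfacing recurring themes across reviews by counting
# frequent terms. The reviews and stop-word list are made up.
from collections import Counter
import re

reviews = [
    "Battery life is great but the charger feels cheap",
    "Love the screen, battery lasts all day",
    "Charger stopped working after a week",
]

STOP_WORDS = {"the", "is", "but", "a", "all", "after", "and", "feels"}

words = Counter(
    word
    for review in reviews
    for word in re.findall(r"[a-z]+", review.lower())
    if word not in STOP_WORDS
)
print(words.most_common(5))  # "battery" and "charger" surface as themes
```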
It’s important to note that, no matter how advanced AI is, organizations shouldn’t fully rely on AI to extract data and make inferences. Human intelligence will still be required to manage AI tools and make sure they’re pulling the right data, making accurate inferences, and interpreting language correctly.
AI is all about saving time and allowing employees to add more value to the organization. As AI advances, it will alleviate a lot of time-consuming activity and allow employees to focus on the strategies and conversations that will drive business decisions.
The Right Mindset for AI
Remember, AI is a tool. It’s not a decision maker or a business strategist. While it can replace a lot of human tasks, it doesn’t replace a human. The people using AI tools are the key to making AI tools successful. The right tool in the wrong hands won’t solve anything. But if used correctly, AI has the potential to add significant value to organizations.
A Word on Best Practices
When it comes to managing data, the following practices are crucial, with or without AI.
1. Establish Good Governance Around Data
Setting standards around creating data is key. We’ve seen companies that have multiple people entering the same data in their system under different names (e.g. GE, GE Power, General Electric). When someone asks to see information for GE, the data isn’t accurate. Organizations must have good governance and data ownership on the front end so information is centralized.
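A simple illustration of this kind of front-end governance is an alias table that rolls known variants up to one master record and flags anything unrecognized for review. The alias list below is made up.

```python
# A small sketch of front-end data governance: mapping known aliases to one
# master record so "GE", "GE Power", and "General Electric" roll up together.
# The alias table is illustrative.
CUSTOMER_ALIASES = {
    "ge": "General Electric",
    "ge power": "General Electric",
    "general electric": "General Electric",
}

def normalize_customer(name: str) -> str:
    """Return the master customer name, or flag the entry for data stewardship review."""
    master = CUSTOMER_ALIASES.get(name.strip().lower())
    if master is None:
        return f"REVIEW: '{name}' is not in the master customer list"
    return master

for entry in ["GE", "GE Power", "Gen. Electric"]:
    print(entry, "->", normalize_customer(entry))
```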
2. Define & Agree on Metrics
It’s important to agree on and define which metrics are to be tracked. Know what’s included in each metric and what’s not. If people realize data metrics aren’t consistent or correct, they won’t believe the data. They’ll be more likely to create their own databases that are more consistent with the data they need.
Terminology is critical. In many organizations, different departments or divisions within the company have different definitions for the same word. But there shouldn’t be any ambiguity on a well-built report. It’s important to note that BI software or tools can’t solve this problem. It’s about the processes around business intelligence and the people involved. BI software is maintained by people, and the processes for maintaining it must be clear and thorough.
Crucial Components of an AI Implementation
by Peter Purcell
There are some critical components when implementing AI in your organization. The general process isn’t all that different than implementing an ERP system or other type of software. It’s important to develop a strategy to obtain the maximum value out of this transformative technology. Below are some of the critical components to account for in your AI implementation plan.
Define Objectives & Understand Business Requirements
Determine which problems the AI tool will solve and how it will solve them. Don’t just implement AI for the sake of AI. It’s important to define where and how AI will provide real value to your organization. This involves mapping out current and desired future-state processes to find inefficiencies and understand where AI fits in.
Turn general ideas into specifics. In doing so, you will eliminate confusion and establish a clearer vision. With a clear vision, it’s easier to create buy-in.
Align the Organization
Everyone should understand why AI is being implemented, why it’s beneficial, and how it will create efficiency. Change of any kind can be challenging, so it’s important to garner support early on.
Aligning the organization also includes identifying who is responsible for each process impacted by AI and who will be responsible for maintaining the new technology going forward.
Additionally, an implementation team will be vital for project success. Involve the people who have the right skills, will champion the new technology, and are committed to seeing the implementation through to completion.
Select the Right Tool
If several options are on the table, here are a few steps to narrow down your list.
- Create requirements based on the business objectives for the new AI tool.
- Leverage the requirements to research, identify, and shortlist companies that provide tools supporting them. Hint: use ChatGPT to help you here.
- Request demos and ask for the companies to provide a clear understanding of functionality, cost, support model, and product development cycles.
- Request references because AI is still relatively new, so certain products might be readily available but still in the testing phase.
- Involve the right people who have valuable insight. Expanding involvement from team members can often plug holes and confirm whether or not the new tool will support the organization’s requirements and eliminate inefficiency.
- Evaluate long-term value. Consider if and how the tool will be used in the next year and beyond. Is it scalable? Are the expected benefits worth the investment?
Find an Implementation Partner
It’s important to consider teaming with an implementation partner when adopting any new AI solution. Many companies have been in the AI space for a long time and know the ins and outs of AI and how it functions in a variety of business environments. They will have in-depth knowledge on how to introduce such a powerful tool. They will know how to set boundaries, prioritize security, create buy-in, and optimize your investment for the future.
Rollout & Training
Develop a plan for how you’ll roll out the new tool across the organization. When it’s time for rollout, provide role-specific, hands-on training to employees. Focus their training on how the AI relates to their role and how they’ll use it. Throughout training, encourage employees to share their feedback. Their insights can help identify areas that need additional focus or improvement.
Establish Governance
AI requires ongoing governance to monitor and maintain. It will need to be examined for accuracy on a regular basis to ensure it delivers correct, up-to-date information. It will also need to be examined for effectiveness and consistency to ensure that AI stays intelligent and isn’t misusing or misunderstanding information. We recommend establishing a regular schedule for maintenance and/or review.
What to Know Before Implementing AI in Your Organization
by Todd Boutte
There’s a growing curiosity around the role of AI in business. People are wondering where AI fits into their organization, which AI apps are useful, and which can be ignored. ChatGPT set the recent AI surge in motion, making AI more tangible and accessible to the masses. It’s no longer a vague concept. We’re seeing real world use cases of AI influencing the way we work. However, as organizations shift their attention toward AI, there’s a lot of uncertainty around how to approach it.
It’s important to understand the implications of AI and how to think strategically about where it fits in your organization.
Below are a few recommendations for how to approach AI amid the hype and countless applications available.
1. Identify Where to Use AI
While AI holds great promise, not every application will be beneficial or necessary for your organization. It’s crucial to identify the areas where AI can provide real value to your business. This begins by identifying the challenges and pain points within your organization that could be alleviated with AI. Start by focusing on the problems that need solving rather than the solutions.
Some key questions to answer during this process:
- Customer responsiveness – Are there areas in the organization slowing down customer responsiveness?
- Repetitive tasks – Are there pockets of highly repetitive tasks requiring a lot of people to accomplish simple objectives?
- Market competition – Where are the pain points when going head-to-head in the market? Where can you leapfrog the competition?
- Processes – Are there bottlenecks in processes that prevent the organization from being nimble?
- Knowledge sharing – Where are there opportunities to improve knowledge sharing within the organization?
Bottom line: Don’t implement AI for the sake of AI.
2. Understand AI’s Current Capabilities & Limitations
As of 2023, AI has strong capabilities, but it’s not a magic wand. AI is not capable of creative thinking, understanding human emotion, or strategic planning—these are areas where humans excel. AI can’t process nuance to the same degree. For now, AI should not be seen as a replacement for human labor, but as a powerful tool to make us more efficient.
To effectively implement AI, certain skillsets will be required to monitor, maintain, and continually reevaluate AI as it grows. So humans are still part of the equation.
3. Start Small and Scale Up
A common mistake organizations make when implementing AI is trying to do too much too soon. A better approach is to start small, test, and learn. Choose a specific process, task, or business function that could benefit from AI. Implement, test, measure the results, and learn from the experience. For example, you might use AI to analyze your sales data and identify patterns that can inform your sales strategy. Once you’ve seen success on a smaller scale, you can gradually scale up and apply AI to more complex tasks and larger business functions.
4. Develop an AI Strategy
Just like any technology or ERP implementation, incorporating AI requires thorough planning, analysis, and training. While many employees are using ChatGPT (and possibly a few other tools) for their own purposes, anything implemented across the entire organization will impact processes, roles, budgets, and overall operations.
Some of the major steps an AI strategy should include are:
Aligning the organization – A big hurdle during any implementation is aligning the organization around the initiative. Everyone needs to understand why the AI tool is beneficial, how it will be used, who has ownership, etc. Change can be difficult, so creating support early on is important. Organizations will also need an implementation team with the right skills, decision-making authority, and follow-through.
Understanding business requirements – What are current processes and how will they be improved with AI? For implementation success, business process requirements must be laid out step by step. Specifics are important when mapping out processes. It eliminates confusion and lack of direction in the long run.
Training and clear communication – Decisions should be communicated among the project team throughout the project, and when it’s time to roll out the new tool, training will help employees confidently use the new system. Training should provide employees with the initial tools and knowledge they need to understand the system. Usability directly influences adoption rates.
The strategy will likely also include selecting the right tool, finding an implementation partner, rolling it out, and refining roles and responsibilities—the same things required for a successful technology implementation of any kind.
Connect with Trenegy for more non-traditional insights.