Five Tips for Building the Right Multi-sided Platform for Your Business


Commerce as we know it today is in the beginning phases of a paradigm shift. The traditional pipeline business model, in which producers provide a product or service and push it through a channel down to consumers, is being replaced. A new business model known as the multi-sided platform is gaining traction in the market: multi-sided platforms directly connect masses of buyers and sellers in a single electronic marketplace.

Amazon is strangling traditional retail stores. Uber is decimating the taxi industry. Airbnb is keeping hotel chains on the edge of their seats. Most companies recognize the new economic trend, yet awareness is only the first step in a long and complicated process of shifting from the traditional pipeline model to using platforms effectively. Many traditional companies do not know what it takes to implement platform technology successfully, so they throw more money into projects that do not produce the desired results. A lack of understanding of how multi-sided platforms work and create value is causing failures and delays.

So, what exactly is a multi-sided platform?

According to Platform Revolution by Parker, Van Alstyne and Choudary, a platform is a business model that brings together producers and consumers to facilitate the exchange of information, currency and goods or services. The platform marketplace generates value by using technology to connect communities. That connection reduces friction by enabling mutually beneficial transactions between two or more parties without a middleman.

Another key element of many successful platforms is crowd-sourcing: the method of gathering either funding or resources from outside the company, usually through various channels on the internet. By leveraging third-party resources, platforms can remain nimble and scalable while keeping fixed costs low. Platforms have virtually rewritten the book on valuation by de-linking assets from value. This unique approach has allowed companies such as Uber and Airbnb to earn valuations in the tens of billions of dollars without owning a single car or home, respectively. Combine crowd-sourcing with the concept of positive network effects – the idea that the more people use the platform, the more valuable it becomes – and we begin to understand how multi-sided platforms are achieving unprecedented levels of growth and success.
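One common way to make the network-effect intuition concrete is Metcalfe's law, which models a network's value as proportional to the number of possible connections among its users. The sketch below is our own illustration of that idea, not a valuation method from the article:

```python
# Illustrative sketch (our own assumption, not from the article): Metcalfe's
# law models positive network effects by valuing a network in proportion to
# the number of possible pairwise connections among its users.
def metcalfe_value(users: int, value_per_connection: float = 1.0) -> float:
    """Approximate platform value as n*(n-1)/2 possible connections."""
    return value_per_connection * users * (users - 1) / 2

# Doubling the user base roughly quadruples the modeled value,
# which is why platform growth compounds rather than adds.
print(metcalfe_value(1_000))   # 499500.0
print(metcalfe_value(2_000))   # 1999000.0
```

Under this simple model, each new user makes the platform more valuable for every existing user, which is the compounding dynamic the paragraph above describes.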

Ok, where do we start?

One of the biggest misconceptions about successful platform technology is that you must completely rewrite your business plan, hire expensive software engineers and move your company to Silicon Valley. That is simply untrue. Many companies in traditional industry segments leverage platform technology to increase efficiency and reduce friction without changing their overall strategy, and new platforms are popping up across the country. Below are five tips for building the right multi-sided platform for your business:

1. Understand the User Journey

Many projects nose-dive from the start because they fail to focus on the most important aspect of any multi-sided platform: the user journey. Before development begins, a platform must clearly articulate the specific types of end users it is targeting and the overall experience it wants to deliver. That definition is achieved by creating two crucial items – user personas and journey maps:
User Personas: Creating a series of user personas will clearly define the type of people the company envisions will be using the platform. For example, one persona for Uber might be Jeff, a 21-year-old biology student who needs a ride home after he’s had one too many at the bar. Another persona could be Sharon, a 30-year-old project manager who lives in Chicago without a car and needs to get to and from work every day. By understanding the customers who will be using the platform, developers and marketers can better tailor the technology to the demographic target market.
Journey Maps: Journey mapping is the process of visualizing and understanding the journey a user goes through to accomplish a goal on the platform. Journey maps are like process flows, but with more emphasis on the user experience. They are used to identify user needs, emotions and pain points before development begins.
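A persona is ultimately a small structured record that developers and marketers can share. The sketch below captures the Uber-style personas described above as data; the field names are our own assumption, not a standard persona template:

```python
from dataclasses import dataclass

# Hypothetical sketch: a user persona captured as a structured record so
# developers and marketers work from one shared definition. The fields
# chosen here are illustrative assumptions, not an industry standard.
@dataclass
class UserPersona:
    name: str
    age: int
    occupation: str
    need: str            # the job the user hires the platform to do
    pain_points: list    # friction the platform should remove

jeff = UserPersona(
    name="Jeff", age=21, occupation="biology student",
    need="a safe ride home after a night out",
    pain_points=["no designated driver", "unreliable late-night transit"],
)
sharon = UserPersona(
    name="Sharon", age=30, occupation="project manager",
    need="a daily commute without owning a car",
    pain_points=["no car", "unpredictable taxi availability"],
)
```

Keeping personas in a structured form like this makes it easy to check each proposed feature against the needs and pain points it is supposed to address.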

2. Develop Wire Frames

First impressions are important, and a clean, inviting interface creates one. In many cases the user interface is more important than the actual functionality of the platform application. Wire frames are skeletal outlines of a website or application that depict exactly how every screen and step should look once the final product is up and running. Wire frames are an essential piece of the planning phase because they communicate the overall aesthetic and usability to developers, marketers and potential investors. Wire framing often requires enlisting graphic artists to ensure high quality.

3. Scope the MVP Appropriately

Contrary to its name, an "MVP" in this context has nothing to do with sports. The acronym stands for Minimum Viable Product: the first functional version of a new piece of technology. You may have grandiose plans for how your platform will ultimately work, but a trimmed-down version lets a company start with something small instead of spending years trying to build the perfect application. An MVP is essentially a skeleton of the product, just functional enough to communicate your message and experience on a very limited budget. Do not get caught trying to boil the ocean here, or your platform will never get off the ground. The purpose of the MVP is to gather feedback along the way and understand what users want in the application. Only after you have wooed your intended audience and listened to their feedback can you start adding the bells and whistles users desire.

4. Define Test Markets

In a similar vein to the MVP discussion, the initial test markets for your platform must be properly identified. Test markets are small, specified groups used to test the viability of a platform before its official launch. They are most commonly siloed to a certain geographic region, such as a city, or to a specific demographic of "test users." The test markets must include a sample of the users you anticipate will use the platform going forward; otherwise, your test marketing data will be unreliable and will not reflect how your user base will react at launch. Third-party firms that specialize in this type of testing are often hired at this stage of the development process. Accurate test marketing is critical because it allows for last-minute tweaks if any key issues are exposed.

5. Prepare for Failure

No one likes to dwell on the potential for failure, yet the sobering truth is that many platform launches are unsuccessful, and at least some aspect of every platform endeavor will fall short of expectations. The key is to be realistic when setting goals and to remain motivated even when parts of the plan do not come together perfectly. It is wise to face the threat of failure head-on by creating a backup strategy in case your platform is ever at risk of falling on its face. The backup strategy might involve other uses for the platform, or something as simple as changing its target market. Favor started out as a "burritos and beer" delivery service and now connects delivery drivers who can deliver anything from food to school supplies.

Whether the goal is to transform the company into the next tech unicorn or simply to create more efficiency within your own walls, these five tips apply. At Trenegy, we have developed a comprehensive Multi-sided Platform Development Checklist to make sure your company does not miss a step. Contact us for your complimentary copy today.

Master Data Management: The What, Why, Who and How


It is day three of driving a brand-new, shiny SUV around town when the letter carrier delivers an unexpected letter from an unfamiliar tire manufacturer. The letter explains that the tires on which the oh-so-pretty SUV sits have been observed to explode unexpectedly when travelling at high speeds. After remembering going eighty-five down the highway in a rush to get to work yesterday, the question arises: how do tire manufacturers determine who is driving on their tires? The answer – Master Data Management (MDM).

What is MDM?

MDM is a process-based model used by companies to consolidate and distribute important information, or master data. The idea is to have an accurate version of master data available for the entire organization to reference.

Master data is the agreed-upon core data set of a business. As opposed to reference or transactional data, which could be something as mundane as the number of invoices completed in a day, master data is data directly linked to the meat of the business.

Master data varies by organization and industry, but it typically includes detailed information about vendors, customers, products and accounts. Master data is critical because conducting business transactions without it is nearly impossible: without first establishing product codes for a particular model of tire, the manufacturer would not be able to track which tires are sold to which customers.

Why is MDM Important?

In the example above, the only way the tire manufacturer would have the correct customer information for the owner of the new SUV is if the dealership provided it. You can already see why maintaining a database of all customers is important. Sure, manufacturers might inundate customers with flyers and mailed ads, but wouldn't you appreciate the notification about potential tire explosions?

Managing master data is important, because business decisions are based on the story the company’s data is telling. Even the simplest of errors in master data will trickle down, causing magnified errors in other applications utilizing the flawed information.

Companies with non-existent or underdeveloped MDM processes often encounter finger-pointing and displaced blame as a result of data discrepancies. Discrepancies surface when, for example, month-end sales reports are delivered with conflicting data from the accounting and manufacturing systems, making it difficult to determine which system, if any, has the most accurate information.

Who is Responsible for MDM?

MDM is often mistaken for data quality projects or technology systems owned by IT. Although IT may be involved in the distribution of master data, there is no sole owner of MDM. Maintaining the integrity of critical company data requires a company-wide effort and commitment to the ongoing maintenance of master data.

It is important to clearly define ownership of the components of MDM, including establishing data governance (standards for how data is used), creating an MDM strategy, and developing procedures for maintaining and distributing information to the people who need it.

A successful MDM program should include holding people accountable for maintaining master data and streamlining the sharing of critical data between departments.

How Do I Create an MDM Organization?

Improper maintenance of master data causes reporting inaccuracies and can lead to poor business decisions. The steps below should be followed to establish an MDM organization:

  1. Define which data is master data—products, customers, vendors, etc.
  2. Determine primary data sources and consumers—the CRM system houses the customer master list, which is owned by the credit department and used by the sales team.
  3. Designate ownership of each master data set—the AP Clerk is responsible for entering and updating vendor information.
  4. Develop data governance processes—all new product information requires review/approval from the management team prior to product entry into the accounting and manufacturing systems.
  5. Design necessary tools and workflows—technology can be implemented to help automate approvals and the flow of information.
  6. Deploy processes for maintaining master data—businesses can create templates and enforce procedures to capture requested master data updates.
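The governance steps above can be sketched in a few lines of code. This is a minimal illustration of the workflow (our own assumption, not a specific MDM product): each master data change has a designated owner, requires independent review, and is distributed to consuming systems only once approved:

```python
from dataclasses import dataclass

# Minimal sketch of the steps above (an illustration, not a real MDM tool):
# master records carry a designated owner, pass governance review, and only
# then flow to the systems that consume them.
@dataclass
class MasterRecord:
    domain: str        # step 1: which data is master data ("vendor", "customer", ...)
    key: str
    attributes: dict
    owner: str         # step 3: designated data owner
    approved: bool = False

def approve(record: MasterRecord, reviewer: str) -> MasterRecord:
    """Step 4: governance review before the record enters downstream systems."""
    if reviewer == record.owner:
        raise ValueError("reviewer must be independent of the data owner")
    record.approved = True
    return record

def distribute(record: MasterRecord, systems: list) -> list:
    """Steps 5-6: only approved records flow to consuming systems."""
    if not record.approved:
        raise ValueError("record not approved for distribution")
    return [f"{system}:{record.domain}:{record.key}" for system in systems]

vendor = MasterRecord("vendor", "V-1001", {"name": "Acme Tires"}, owner="AP Clerk")
approve(vendor, reviewer="Controller")
print(distribute(vendor, ["accounting", "manufacturing"]))
# ['accounting:vendor:V-1001', 'manufacturing:vendor:V-1001']
```

The point of the sketch is the shape of the process, not the code itself: ownership, independent approval and controlled distribution are what keep the accounting and manufacturing systems reading from the same master record.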

Organizations large and small are faced with data challenges. Occasional focus on cleaning up critical data is not enough. Creating a comprehensive MDM strategy is the starting point to having confidence in company data. Avoid the pitfalls of poorly maintained master data by establishing processes to manage the creation of new master data and enforcing everyday practices to maintain the data over time.


Trenegy is a management consulting firm equipping energy and manufacturing companies for growth and change.

The Power of Storytelling



Making change endure and building culture are two of the hardest things a leader has to accomplish. Fortunately, there is a powerful tool available for leaders to use: storytelling. In this article, we will explore the power of stories, why they work, and the elements that form a great story.

The power of stories

Appropriately, let’s begin with a story.

Jesus lived on this earth until approximately the year 30, but most scholars believe the books of the New Testament were not put down in writing until between the years 50 and 100. Even then, the challenge of remembering the events of Jesus' life did not end. The stories of the New Testament were primarily passed down orally, because copying the gospels by hand was banned until the emperor Constantine allowed it at the Council of Nicaea in the year 325. For almost 300 years, the accounts of the New Testament were passed on primarily from person to person! From that point until the invention of the western printing press in 1439, the Bible was still copied primarily by hand, so very few people owned one. For almost 1,400 years, then, the stories in the New Testament were learned primarily through storytelling. I would venture to say that most people in western civilizations can recite one or more stories from the Bible. That, dear reader, is some powerful storytelling.

Here’s a more recent example. Steve Epstein was a lawyer for the U.S. Department of Defense and in charge of the Standards of Conduct Office. When he conducted training, he found that reciting the rules alone was not working. The message did not seem to “stick”. So, he created the Encyclopedia of Ethical Failures in which he collected stories of compliance failures in chapters titled “Bribery”, “Abuse of Power” and the like. Here’s a real gem from the encyclopedia:

A military officer was reprimanded for faking his own death to end an affair. In a plot worthy of a daytime soap opera, a Navy Commander began seeing a woman whom he had met on a dating website, neglecting to tell her that he was married with kids. After six months, the Commander grew tired of the relationship and attempted to end it by sending a fictitious e-mail to his lover informing her that he had been killed. The Commander then relocated to Connecticut to start a new assignment. Upon receiving the e-mail, his mistress showed up at the Commander's house to pay her respects, only to be informed by the new owners of the Commander's reassignment and new location. The Commander received a punitive letter of reprimand and lost his submarine command.

This story, although better than just reciting policy statements, can still be improved. We will look more into that later in the article.

Why stories work

In 1944, Heider and Simmel conducted a study that showed that our brains are wired for stories. They played a movie to groups of students (you can see the movie here). The movie shows silent geometric shapes moving around the screen. One group was given directions to describe the story they saw. The other group was given little direction prior to seeing the movie, and then was asked to describe what they saw. Not surprisingly, there was very little difference between the two groups: they both saw the story of an angry triangle and its confrontation with a friendly triangle and circle.

This experiment shows how important putting data into the context of a story is to our brains. In fact, 65% of our conversations are stories – primarily in the form of gossip. Why is that?

When processing facts, only two areas of our brains are engaged. These areas do nothing more than decode the words we hear into their dictionary meaning. But when we listen to an effective story, many other parts of the brain are activated. This results in interesting activities happening in our brains.

The first is neural coupling. Neural coupling allows the listener to turn words into virtual experiences. For example, smell words such as "lavender" and "cinnamon" activate the smell centers of the brain as if the listener had actually smelled them. Similarly, an active sentence such as "Bob kicked the ball" activates the portion of the listener's motor area associated with leg motion.

Neural coupling allows an effect called mirroring to take place. Mirroring lets a storyteller relate personal experiences directly to the listener. Since the motion and sensory areas of the brain are activated by action words and sensory descriptions, a storyteller can almost recreate his or her reality in the listener's brain by using those words. When a storyteller relates an effective story, the listeners' brains are literally living those events.

Finally, the brain releases dopamine into the system when it experiences an emotional event – even through storytelling. Dopamine imprints a memory and makes the event easier to remember with more clarity. This is why saying “don’t go near that dog” is not as effective as saying “my friend was mauled by that slobbery, mean dog, so stay away.” The combination of neural coupling, mirroring and dopamine makes storytelling 22 times more effective in helping the listener retain information than data alone!

Furthermore, people are not rational beings by default. Our brain operates from two systems – sometimes called Fast Thinking (System 1) and Slow Thinking (System 2).

Fast Thinking is intuitive and uses less energy; therefore, it is the default system our brain uses. This is the system that allows you to talk while driving your car; to play the guitar without looking at the strings; or to just know that 2+2=4.

Slow Thinking is the rational and deliberate system. It uses significantly more energy than Fast Thinking, so it is not the system the brain chooses by default. Slow Thinking is what makes you think through a process or figure out a problem; it is how the brain operates when you calculate 17 x 54 (which, by the way, is 918). Likewise, when you learn a new skill – think of driving a car – your brain relies on Slow Thinking. The learner has to think through every step and feels tired and drained afterward. But during that learning, the brain is literally rewiring itself: new connections form until the new skill can be performed without really thinking about it.

Again, our brains are designed to use Fast Thinking as the default; it is more energy-efficient and faster. If our caveman ancestors had had to use Slow Thinking for everything, we would not be here today. They would have been eaten by saber-toothed tigers while thinking about how to react!

These peculiarities of how our brains function are why flight simulators work for training an aircraft pilot. Without first practicing in a flight simulator, a mistake by a first-time pilot while landing in an unfamiliar airport (or on a short landing strip, or with a strong crosswind) could result in death, fire, and mayhem. But with a flight simulator, the same pilot can try the landing many times – and fail many times – without anyone getting hurt. All the while, the pilot’s brain is rewiring to be able to perform the real landing using their Fast Thinking brain.

Stories work like flight simulators on our brains. Because a well-crafted story causes our brains to activate as if we were experiencing the events (through mirroring), stories take us into intense simulations of situations that we are able to experience in parallel to our real life. We can have that rich experience without getting hurt at the end.

Stories rewire our brains much in the same way that a flight simulator does. These new brain connections then allow us to react using our Fast Thinking brain when exposed to similar real-life situations as those that we were exposed to through stories.

Connecting through stories

In his book I and Thou, the philosopher Martin Buber states that human existence is defined by two word pairs: I-Thou and I-It.

Our relationship with things, the I-It relationship, separates us from the object. Our relationship with other people, the I-Thou relationship, eliminates the boundaries between us. That is why even an extreme naturalist has no qualms about cutting down a tree for a fire to warm a cold family: the connection you have with your family – the I-Thou relationship – bonds you to them, while the I-It relationship with the tree separates you from it.

Storytelling allows us to see the subject of the story in an I-Thou relationship instead of an I-It relationship. When implementing change in an organization, simply putting lessons learned into policies and procedures keeps us separated from the results, and therefore does not create a sustainable culture. Understanding the “thou” behind a story builds a long-lasting connection.

In the medical field, many hospitals are now using this concept to avoid medical mistakes. Instead of referring to patients solely by their medical condition (believe it or not, medical staff used to call patients names like "the broken leg in room 2"), the staff is encouraged to build the "story" behind the patient in charts and conversations. This has been shown to reduce the number of medical errors, since the staff is now connected to the patient through an I-Thou relationship instead of an I-It relationship.

Building a great story

The steps to building a great story can be remembered through the acronym "CAR", which stands for Context, Action, and Results.

Context sets the background for the story. Where and when did it happen? Who is the main character? What does your character want to accomplish? Who is the villain, or what stands in your character’s way? The main character in your story needs to be someone the audience can connect to, and the villain (which doesn’t need to be a person, but could be a situation) needs to present a real challenge.

Action is the substance of your story. What does your character do? Action must include an obstacle, setback or failure.

Finally, Result is where you reveal your character’s fate. To be effective, you must subtly give the moral of the story.

To be fully effective, remember why stories work. Create an experience that induces mirroring by using action words and words that stimulate the senses, and by avoiding clichés. Engage the audience by illustrating a real struggle. Pit your character against real villains – whether they are people, equipment, procedures, a change initiative, or anything else. Finally, admit flaws. Rosy pictures are boring.

Next time you are in a movie theater, look around you and watch the people. If the movie is effectively telling a story, you will see everyone reacting as a single organism. They will all laugh at the same time, flinch together, gasp simultaneously… A great story builds a common thread across a group of people. The story does not even have to be told to a group that is together. Think about the connection you feel with other people who also saw the latest blockbuster movie. You do not have to be in the same movie theater to share the experience.

Just as in the days of the early Christians, stories continue to serve the function of building a community through common experiences. Common stories build a culture. They encourage a group of people to behave in a unified way. Effective change management and the building of a strong culture cannot be done without the transformative power of storytelling.

Don’t Avoid the Checkup – Embrace the New Lease Standard


How many times have we cancelled a dental checkup because we were too busy? Eventually tooth pain reminds us to visit the dentist, who says, "You should have come sooner." The FASB issued a new lease standard, Leases (ASC 842), on February 25, 2016. Are you ready to move forward with implementing the new standard, or do you want to delay until the pain is felt?

The key provision of the new FASB lease standard is that lessees will recognize virtually all their leases on the balance sheet by recording a right-of-use asset and a lease liability. This includes operating leases, which previously were recorded off balance sheet. The existing lease standard has been criticized for failing to meet the needs of financial statement users because it does not always faithfully present leasing transactions; the new standard provides for greater transparency in financial reporting. Companies that lease assets such as real estate, manufacturing equipment, vehicles and airplanes will be impacted.
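The balance-sheet mechanics can be sketched with a simple present-value calculation. The sketch below uses illustrative figures and a simplified case (no initial direct costs, incentives or prepayments), so it is a teaching example rather than accounting guidance:

```python
# Hedged sketch of the core ASC 842 mechanics (illustrative, simplified):
# the lease liability is the present value of the remaining lease payments,
# and in a simple case the right-of-use asset starts at the same amount.
def lease_liability(annual_payment: float, rate: float, years: int) -> float:
    """Present value of an ordinary annuity of equal annual lease payments."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# A 5-year operating lease of $100,000/year discounted at 5% now lands on
# the balance sheet at roughly $432,948 instead of staying off it entirely.
liability = lease_liability(100_000, 0.05, 5)
right_of_use_asset = liability  # simplified: no direct costs or incentives
print(round(liability, 2))      # 432947.67
```

Even this toy example shows why the standard matters to debt covenants: a lease that used to be a footnote becomes a six-figure liability on day one.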

Public companies must comply with the new lease standard for fiscal years starting after December 15, 2018; non-public companies, for fiscal years starting after December 15, 2019. Financial executives may look at those dates and be inclined to focus on projects with more immediate deadlines, addressing the new lease standard later. Instead, they should devote some time now to analyzing the potential complexity of the implementation and its impact on company resources. That analysis helps a company decide when to move forward with the implementation process and avoid unnecessary financial reporting risks.

The AICPA recommends six steps to an effective implementation of the new lease standard:

  • Assigning an individual or a task force to take the lead on understanding and implementing the new standard;
  • Updating the list of all leases;
  • Deciding on a transition method;
  • Reviewing legal agreements and debt covenants;
  • Considering IT system needs;
  • Communicating with stakeholders.

At first glance, the six-step recommendation seems simple and manageable. Before we get too comfortable with its simplicity, let's peel back the onion with a few questions. Do you have technical accounting staff available to spend quality time understanding the lease accounting guidance and determining how it impacts your company? Do you know where all of your lease contracts are located? If your company is public, how will you decide whether to transition with the retrospective method (which requires restatement of comparative periods in your financial statements) or the modified retrospective method (which does not)? Do you have debt covenants or other legal agreements limiting debt levels or requiring approval before incurring additional debt? Do you manage your lease records in an IT system, or in Excel or a similar process? Have you discussed the impact of the new leasing standard with executive management, the board of directors, debt holders and other stakeholders?

You may not have answers for all of the questions above, and as you move forward with the implementation of the lease standard, many more questions will arise. The implementation process will not be limited only to the accounting staff. Moving to the new reporting standards will be a company-wide initiative with communication and cooperation among several departments including treasury, legal, facilities management, purchasing, logistics, and fleet management, to name a few. To achieve success with the implementation requires development of a project plan including input from a wide range of functions and requires commitment from executive management.

Companies should look at the implementation as more than a compliance project and use the opportunity to create value for their company. Below are examples of opportunities for value that may be identified during the implementation process:

  • New avenues to improved communication among different departments within the company
  • Improvement of existing internal controls and processes, updates to related documentation, and communication of improvements and changes to affected parties
  • Selection of cost efficient IT solutions to track leases and to meet reporting requirements for the lease standard
  • Consolidation of lease vendors and negotiation of improved pricing
  • Termination or buy out of stale and unneeded operating leases

Companies can identify still more opportunities to create value beyond compliance. Challenging the organization to stay vigilant in identifying value-creating opportunities in all daily tasks is critical for continuous improvement.


Trenegy helps companies to implement new accounting guidance and to identify opportunities for companies to create value through process, controls and system improvements.

The Accounting Rule You Need to Know Before Moving to the Cloud


There are a number of factors our clients consider when evaluating the purchase of cloud software. The main factors often include system performance, security, data access and, of course, cost – specifically, which costs must be expensed and which can be capitalized.

Due to recent updates to the standards for intangible asset accounting, the rules for which costs can be capitalized and which must be expensed are no longer as clear-cut as they used to be. The presumption that a company can capitalize costs incurred in software implementation activities no longer holds true under every circumstance, or every type of contract, when it comes to cloud software.

At the beginning of 2016, the Financial Accounting Standards Board (FASB) threw an Adam Wainwright-style curve ball to companies which are evaluating or have purchased cloud computing software. You can read the full update to the Accounting Standards Codification (ASC) 350-40, Internal Use Software here.

However, the update created somewhat of a gray area around whether a cloud computing agreement represents a purchase of software or a purchase of services.

The update states that, depending upon the specific language of a cloud computing contract, the purchase costs may be viewed in one of two ways:

  1. as the purchase of a software license
  2. as the purchase of a service

To be deemed a purchase of a software license, the cloud computing contract must explicitly state that the customer is paying for the transfer of a license required to operate the software. Otherwise, the contract is viewed as a purchase of services.

If a contract is viewed as a purchase of services, the costs must be accounted for like any other service contract, which means all costs are expensed when the service is performed. The only opportunity to carry these costs on the balance sheet is to book them as a prepaid asset and amortize them as the prepaid (software) services are used.
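The prepaid treatment described above amounts to a straight-line amortization schedule. The sketch below is a simple illustration with made-up figures, not tax or accounting advice:

```python
# Illustrative sketch of the service-contract treatment described above:
# fees paid up front are booked as a prepaid asset and then expensed
# straight-line as the software service is consumed. Figures are invented.
def amortize_prepaid(prepaid: float, months: int) -> list:
    """Straight-line monthly amortization schedule for a prepaid service."""
    monthly = round(prepaid / months, 2)
    schedule = [monthly] * (months - 1)
    schedule.append(round(prepaid - sum(schedule), 2))  # plug rounding into final month
    return schedule

# A $120,000 annual subscription paid up front hits the P&L at $10,000/month.
expense = amortize_prepaid(120_000, 12)
print(expense[0], sum(expense))  # 10000.0 120000.0
```

Note that this only smooths the subscription fee itself; as the next paragraph explains, implementation costs under a service contract cannot be spread this way.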

Being forced to expense all costs associated with purchasing and implementing new software poses a significant hurdle for potential buyers of cloud computing software. If the contract is considered a purchase of services, then implementation costs related to the software – which can often reach seven figures – must also be expensed. The potential for such a large immediate hit to the income statement is more than enough to give many companies pause when evaluating cloud software.

As such, many cloud software providers have also taken steps to simplify the process by moving from software service subscription fees to offering contracts based on software licensing fees. An arrangement which includes a software license is considered “internal use software” and accounted for as an intangible asset. Under the internal use software designation, the typical expense vs. capitalization rules apply and companies are allowed to capitalize and then amortize implementation costs accordingly.
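The P&L difference between the two treatments can be sketched numerically. The following Python snippet is a hypothetical illustration (the implementation cost and useful life are invented figures, not from any accounting guidance): a service contract takes the full implementation hit up front, while a license contract spreads the same cost over the software's useful life.

```python
# Hypothetical comparison of the year-by-year expense stream for the
# same implementation cost under the two contract treatments described
# above. All figures are invented for illustration.

IMPLEMENTATION_COST = 1_000_000  # one-time implementation spend
USEFUL_LIFE_YEARS = 5            # assumed amortization period

def annual_expense(contract_type):
    """Return the expense recognized in each of the next five years."""
    if contract_type == "license":
        # Capitalized as internal-use software, amortized straight-line.
        return [IMPLEMENTATION_COST / USEFUL_LIFE_YEARS] * USEFUL_LIFE_YEARS
    elif contract_type == "service":
        # Expensed when the service is performed: full hit in year one.
        return [IMPLEMENTATION_COST] + [0] * (USEFUL_LIFE_YEARS - 1)
    raise ValueError(f"unknown contract type: {contract_type}")

print(annual_expense("license"))  # 200,000 per year for five years
print(annual_expense("service"))  # 1,000,000 in year one, then zero
```

Note that the total expense is identical under both treatments; only the timing of the income statement impact differs.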

New Accounting Rules Cheat Sheet
With many cloud software vendors offering either a subscription-based or license-based contract, it is important for prospective buyers to understand the impact on the software’s total cost of ownership. In some cases, a subscription or service-based contract may have a lower total cost of ownership. Some clients may choose the service contract to lower the total amount of cash going out the door, while others may choose to pay more for a license-based contract in order to absorb the costs on the P&L over time.

To avoid any surprises with accounting for cloud software costs, we advise our clients to obtain a clear understanding of the pricing model from every prospective cloud software vendor and to take a total cost of ownership approach when making any software decision.

Trenegy assists companies in selecting and implementing the right technology solutions. For additional information, please contact us at:


Integrated Business Planning: Gimmick or the Go-To Method for Doing Business


Companies whose doors have been open for any longer than five minutes know teamwork is important, and each company has its own methods for maintaining communication and teamwork. Sales and Operations Planning, or S&OP, is a perfect example. S&OP is the process by which Sales and Operations work together to create one plan for a specific timeframe. Sales provides projected revenue, and Operations provides expected production. It is a game of supply and demand and predicting equilibrium. The end product is a forecast that aligns with executives’ strategic objectives and serves as a performance measurement tool.
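The supply-and-demand balancing at the heart of S&OP can be sketched in a few lines of Python. This is a toy illustration only; the month names, unit counts, and function are all invented examples, not part of any S&OP standard.

```python
# Toy S&OP balancing sketch: compare the Sales demand forecast with
# the Operations supply plan and flag months that need rebalancing.
# All figures are hypothetical.

demand_forecast = {"Jan": 1200, "Feb": 1350, "Mar": 1500}  # units Sales expects to sell
supply_plan     = {"Jan": 1250, "Feb": 1300, "Mar": 1400}  # units Operations can produce

def reconcile(demand, supply):
    """Return the per-month gap (supply minus demand); negative = shortfall."""
    return {month: supply[month] - demand[month] for month in demand}

for month, gap in reconcile(demand_forecast, supply_plan).items():
    status = "balanced" if gap >= 0 else f"shortfall of {-gap} units"
    print(f"{month}: {status}")
```

In this toy data, January is balanced while February and March show shortfalls, exactly the kind of imbalance the monthly S&OP meeting exists to resolve.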

The inventor of S&OP, Oliver Wight, tells us there is a new concept that improves upon even the most tightly run S&OP organization. The process is Integrated Business Planning, or IBP. According to many skeptics, IBP is simply a marketing gimmick to re-brand S&OP. However, Wight confirms that Integrated Business Planning was not introduced to announce the invention of a new process, but rather to reveal considerable changes to an existing one. The focus of S&OP has shifted toward gaining a better understanding of external factors and aligning all internal functions, not just Sales and Operations.

In the new world of IBP, Sales, Operations, Logistics, HR, Finance, Marketing, and Pricing are all working toward the same goals. Some examples of the ways IBP improves upon S&OP include:

  • Stronger financial integration
  • Improved product & portfolio review
  • Addition of strategic plans and initiatives
  • Improved pricing decision-making
  • Enhanced scenario planning and risk visibility
  • Improved trust within the leadership team

IBP starts with implementing a process that works best for the company. If a company has been operating a run-of-the-mill S&OP process for twenty years, a change management plan to shift to IBP could be exactly what the company needs to take business to the next level. By making improvements to the traditional S&OP process, the company uses cross-functional data to make business decisions, sets targets together, and commits to achieving the strategic plan.

A key benefit of IBP over traditional S&OP is the ability to develop trust with suppliers and customers by including the strategic pricing equation in the planning process. IBP allows companies, and in turn their suppliers and customers, to depend on reliable pricing and available to promise (ATP) numbers. Trust can only be established when people deliver to expectations. Pricing is a key player in IBP, as are ATP dates. Price translates units into dollars, enabling a common language.

Still not sure about the difference between traditional S&OP and IBP? It would not be surprising if the term IBP faded away and the term S&OP remained, but regardless, the concepts of IBP are the new gold standard. Whether a company has employed traditional S&OP processes for decades or is starting from scratch, it is important to apply the latest model. Why settle for a thing of the past when the future is much brighter?

Companies investing in IBP will notice a behavioral shift. For potentially the first time, the entire company will be moving toward the same set of strategic objectives. Ultimately the company will be able to provide a higher level of customer service, improve lead times, increase profit, and enjoy a positive impact on the bottom line.

Trenegy has years of experience helping companies achieve their goals through integrated business planning. Please contact us for more information.



Cyber Hygiene – How to Clean Your Online Presence


In 2000, Chinese hackers began a nine-year hacking assault on telecommunications giant Nortel. The hackers used remote access and automated software to generate large numbers of password guesses, eventually breaking the credentials of seven executive team members. The hackers successfully obtained critical reports, research and development materials, employee emails, and strategic information. Nortel’s top executives neglected to secure the network, and the company eventually declared bankruptcy in 2009. As Nortel disintegrated, Chinese telecom Huawei grew, with some speculating “Huawei’s rise was at the expense of Nortel.”

Nortel’s downfall raises awareness of the devastating consequences that can result from a cyber-attack. However, there is a growing tendency to lump every incident together as simply a “cyber-attack,” leaving us numb to the term rather than educated and making all cyber-attacks seem like nebulous boogeymen. In fact, there are many different types, and taking these threats seriously is the first step in preventing them.


Everyone must evaluate their online behavior and become hyper-vigilant about their cyber hygiene, the measures taken to ensure one’s “health” and safety online. Cyber hygiene begins now, with improving passwords, enabling two-factor authentication, installing anti-virus software, and routinely scrutinizing potential online threats like the ones that felled Nortel.

Business leaders must invest time and money into their organization’s cybersecurity strategy by first training employees to maintain good cyber hygiene. Many executives and board members remain hesitant to spend millions on cybersecurity. However, cyber-criminals steal an estimated $400 billion per year from companies, with much of that theft going undetected. Technological solutions are simply not enough to prevent a cyber-attack. Making employees aware of the threat is crucial.

Here are three things companies must do immediately to implement an adequate cyber hygiene program:

  1. Set tone at the top
    1. Executives are responsible for setting the company culture – when they support a cybersecurity initiative, the company follows
    2. A CEO who takes cyber security seriously will influence his or her employees to do the same
  2. Make cybersecurity a part of the office conversation
    1. Discuss cybersecurity measures regularly – learn from Nortel’s mistakes and make employees aware of the dangers
    2. Create a best practices document with instructions for changing passwords every 90 days, updating anti-virus software and other apps, protocols for downloading 3rd party apps on work computers, etc.
  3. Understand and limit access
    1. Know which employees have access to workstations and keep this information up-to-date (expired accounts are targets for hackers)
    2. Minimize attack exposure by limiting access to only those who need it
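Two of the practices above, rotating passwords every 90 days and disabling expired accounts, lend themselves to a simple automated audit. The sketch below is a hypothetical Python illustration; the account records, field names, and recommended actions are invented examples, not a prescribed tool.

```python
# Hypothetical audit sketch: flag accounts whose password is older than
# the 90-day policy, or that belong to departed users (expired accounts
# are targets for hackers). Sample data is invented.
from datetime import date, timedelta

PASSWORD_MAX_AGE = timedelta(days=90)

accounts = [
    {"user": "alice", "last_password_change": date(2024, 1, 5), "active": True},
    {"user": "bob",   "last_password_change": date(2023, 6, 1), "active": False},
]

def audit(accounts, today):
    """Return (user, recommended action) pairs for risky accounts."""
    findings = []
    for account in accounts:
        if not account["active"]:
            findings.append((account["user"], "disable expired account"))
        elif today - account["last_password_change"] > PASSWORD_MAX_AGE:
            findings.append((account["user"], "force password reset"))
    return findings

for user, action in audit(accounts, date(2024, 6, 1)):
    print(f"{user}: {action}")
```

A script like this could run on a schedule and feed its findings into the regular office conversation about cybersecurity.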

Personal cyber hygiene is equally important. Business and personal information are intertwined, and it is nearly impossible to untangle the spider web when a cyber-attack occurs. Many people manage their work and personal lives on the same smart device, so protecting one’s cell phone is just as important as protecting one’s work computer. It is essential to know what apps are on smart devices, what personal information they require before downloading, and what the potential risks are in having those apps. Scrutinizing emails for suspicious activity on phones and home computers is also important. Essentially, any cybersecurity strategy employed in the workplace carries into the home and impacts personal devices.

It is time to be more cautious on the internet. Technology has come a long way and the most reliable security guard for your information is you.

Kaizen or Just Bringing in Lunch?


Kaizen has been hailed as a quick cure-all for any ailing business problem. Companies like Toyota, Bacardi, and Nestle use Kaizen in their businesses. Yet for some companies, the go-to approach is to lock employees in a conference room with the hope of arriving at a better solution before they are released. Is Kaizen the panacea it claims to be, or simply an excuse to bring in lunch?

In Japanese, “Kaizen” means “improvement.” Dissecting its Kanji reveals “change” (Kai) is “good” (Zen) – more or less. The Kaizen approach focuses on small improvements under the theory that over time, these small changes could result in massive benefits to a business.

The concept of “Kaizen” was first introduced during World War II under a United States program called “Training Within Industry,” or TWI. The TWI group encouraged small, incremental improvements over transformational changes. Eventually, W. Edwards Deming, the management consultant who carried these continuous improvement principles into postwar Japan, was recognized by the Emperor of Japan for introducing the concept of Kaizen to the Japanese workforce.

A typical Kaizen master leads the room through six steps, including:

  1. Establishing the reason for the workshop. Why are we here? What is the scope?
  2. Understanding the current “state.”
  3. Serving boxed lunch.
  4. Developing a future “state” vision.
  5. Creating a timeline and ownership for each step.
  6. Recognizing everyone’s participation with a ribbon.

Kaizen is commonly used in manufacturing companies, since the concept is an essential part of “lean manufacturing” and is associated with the Toyota Production System (TPS). The TPS is well-known for defining standards for eliminating waste in production and unnecessary stress in the workplace. However, any industry, job function, or process can benefit from Kaizen.

Kaizen is most beneficial when:

  • It is seen as a proactive approach to identifying incremental improvements
  • There is a shared mentality that the current state can always be improved upon
  • Continuous improvement is embedded in company culture
  • It is supported by upper management
  • Employees have an outlet to submit Kaizen suggestions

The main drawback to a Kaizen event is “groupthink”: our minds become influenced and confined by the ideas generated in a group, and the loudest person in the room generally drives the conversation. The Harvard Business Review describes the benefits of convergence and divergence well: in a group session, individuals need some time to brainstorm ideas on their own before coming back to the larger group. Once the group is reunited, the individual ideas are discussed and narrowed. The process continues until a solution is achieved. We find that when people feel they have a direct impact on company change, they are more likely to participate in and advocate for change.

Because Kaizen is based on the idea of continuous improvement, it is most effective for small, incremental changes. Standardizing Accounts Receivable templates or updating a work ticket are great candidates for Kaizen to address.

For more complex business problems or if “groupthink” becomes an issue, there is a different approach: the ACE Method. ACE (which stands for accelerate, collaborate, and execute) is a workshop that effectively finds solutions for anything from brainstorming a new company logo to a complete overhaul of the company’s bid to bill process.

Trenegy developed the ACE Method to address problems quickly. A typical workshop lasts about two weeks, and a clear roadmap for action is developed in that time. To combat “groupthink,” ACE employs a trained moderator to ensure there are no bad ideas. Even a thought that seems unhelpful initially could spark a brilliant idea in another participant. ACE uses convergence and divergence, as the Harvard Business Review advises, to brainstorm ideas and improve upon them.

Think of Kaizen as a way to tackle the small issues and ACE as a way to tackle larger challenges, like mergers. Both methods have their advantages. And both are worth far more than just a free lunch!

A New Frontier: Securing the Internet of Things


The IoT is a new frontier. Projected to surpass 50 billion connected objects by 2020, with the potential to add trillions of dollars to global GDP, the Internet of Things offers private consumers the ability to create app-controlled “smart homes” and offers businesses unprecedented access to real-time operational data monitoring, collection, and analysis. In this rapidly evolving, demand-driven industry, technologies are being introduced faster than they can be protected.

For individual consumers, IoT security breaches have the potential to violate privacy, steal personal information, and generally terrorize unsuspecting people by manipulating their home devices. On the industrial or business side of the IoT, the threat of a hack presents far more widespread consequences. Unsecured IoT devices present a perfect opportunity for hackers to wreak havoc by compromising operational and/or safety data being tracked by IoT devices, causing a Distributed Denial of Service (DDoS) incident for customers (like the October 2016 Internet Outage), and accessing, compromising, or ransoming financial systems and data by using connected IoT devices to infiltrate the corporate IT firewall. It is becoming increasingly clear to private consumers and businesses that more focus needs to be dedicated to securing the IoT.

  • Manufacturers and consumers have focused on “ease of use” over security – The vast majority of IoT devices are designed with “ease of use” as the first priority, which traditionally means security takes a back seat. In the name of “ease of use,” many of these devices do not require a username and password reset at the time of setup, relying instead on a manufacturer-provided default username and password. These devices will remain actively connected to the internet without additional credential input indefinitely. These default settings are about as secure as having no password at all.
  • IoT Devices often fall outside of the Corporate IT Cybersecurity Structure – IoT devices are typically categorized as Operational Technology and are therefore managed by Operations departments, often excluded from the corporate IT strategy. When employees connect unsecured IoT devices to company-provided workstations, they inadvertently provide hackers a direct portal into a company’s secured IT environment.
  • Physical Security is often impractical – In traditional IT security, physical security is one of the basic tenets. IoT devices may be spread all over the world, on oil rigs or remote sites, making isolation impossible. By nature, IoT devices are easily accessible, residing in common operational areas of businesses or common living areas of homes.
  • There is currently no “McAfee” equivalent – The average internet user knows that it would be reckless to leave a computer vulnerable without the protection of anti-virus software. However, this type of software has yet to be developed for most IoT devices. This means that most of these devices are not only unprotected but also unmonitored. Devices could be hacked, and the end user would never know unless the hackers make their presence known. A potential solution would be for each manufacturer to develop security software for its own devices. But the IoT is made up of thousands of devices by thousands of manufacturers, and these companies have neither the expertise nor the motivation to develop this kind of software.

Inevitably, sufficient security measures will be developed, but these developments will take time. Until then, here are several ways consumers and companies can keep hackers at bay:

  • Set Strong Usernames and Passwords – The easiest way to secure IoT devices is to change the factory default credentials to a strong, unique username and password. On some devices credentials are difficult to change, and some offer no credential change functionality at all; if a device does not appear to offer the option, contact the manufacturer to be sure. When in the market for a new IoT device, the ability to change credentials should be a critical measure when choosing between products.
  • Bring IoT Devices under the responsibility of IT – While Operations will remain the primary end users of industrial IoT devices, the security of these devices must be included in the corporate IT cybersecurity structure. Whenever possible, bring IoT devices behind the corporate firewall and ensure that IT tracks and deploys any updates provided by device manufacturers.
  • Educate employees/users about IoT Security – As in general cybersecurity, the greatest defense against hacking is a well-educated user base. By informing employees/users about the threat of IoT hacks and how they can prevent them through proper device setup and use, companies can minimize the risk of a hack occurring.
  • Prioritize increased security features – End user demand will drive manufacturers to improve security features and software companies to develop anti-virus programs for IoT devices. As long as consumers continue to purchase devices with no regard for their security, manufacturers will continue to produce the status quo. If currently owned devices offer insufficient or no security features, consider upgrading to newer, more secure options. Consumers should continue to voice their security concerns in the marketplace and, when shopping for new IoT devices, treat cybersecurity as a top priority.
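The first recommendation above can be illustrated with a short Python sketch that scans a device inventory for factory-default credentials. This is a hypothetical example; the device names, fields, and the list of default credential pairs are invented for illustration, not drawn from any real product.

```python
# Hypothetical sketch: flag IoT devices in an inventory that are still
# using factory-default credentials. Device records and the default
# credential list are invented examples.

FACTORY_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

devices = [
    {"name": "lobby-camera", "username": "admin", "password": "admin"},
    {"name": "rig-sensor-7", "username": "ops",   "password": "X9!vT2#q"},
]

def flag_default_credentials(devices):
    """Return the names of devices still on factory-default credentials."""
    return [d["name"] for d in devices
            if (d["username"], d["password"]) in FACTORY_DEFAULTS]

print(flag_default_credentials(devices))  # ['lobby-camera']
```

In practice, a check like this would run against the corporate device inventory so IT can reset flagged credentials before a hacker finds them.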

As the technology community begins to unravel and understand the concept of protecting vast amounts of personal data, IoT users must remain vigilant about securing their own devices. Increasing dependence on internet-connected objects makes securing them a top priority. While alluring, the new frontier of the IoT could leave many people very vulnerable.