
The Dresner Advisory Services 2017 “Wisdom of Crowds Business Intelligence Market Study” found that the only market sector with lower penetration of business intelligence solutions than manufacturing was higher education. Yet according to the US Bureau of Labor Statistics, manufacturing, both discrete and process, stores more data (well over 1,500 petabytes) than any other industrial or business sector.


What Does Business Intelligence Bring to Manufacturing?


Part 1 – Overview of Business Intelligence


I supported myself through college working in restaurants. Dishwasher, busboy, waiter, maître d', pastry chef, bartender: all these roles taught me one thing. The best restaurants operate on recipes: recipes for food and drinks, obviously, but also recipes for training and good service. I often think of business intelligence as a recipe, composed of:

  • One part good assistance from the IT department
  • One part guidance from the internal owners of the enterprise software applications
  • One or more parts of good, clean data

Blend together, and pour into an analytics dish. Glaze with a mix of interpretation, insight, and uncertainty, and ensure the visual presentation is pleasing to the eye. Serve to the customer, explaining the preparation and addressing any questions. Observe the customer at a discreet distance to ensure the dish elicits a level of satisfaction and contentment in the desired outcome.

In short, business intelligence is the process of transforming data into actionable insights.

Some of the benefits of using BI in manufacturing include:


  • Real-time tracking of direct and indirect costs from raw materials to finished goods. This allows margin slippage to be detected during manufacturing and not days after the product is shipped.
  • Increased machine uptime. Data from embedded sensors triggers alarms signaling that a potential cause of failure has occurred.
  • Ability to determine the true profitability of each product (and each customer) as all costs associated with a product, including cost of sales and marketing, become more visible.
  • Real-time tracking of product development costs and performance to schedule.
  • Real-time detection and reporting of product quality problems and out-of-control conditions.
  • On-demand calculation of Key Performance Indicators (KPIs), as in the sketch following this list.
  • More accurate sales and supply chain forecasts.
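
To make the KPI bullet above concrete, here is a minimal Python sketch of an on-demand KPI calculation for Overall Equipment Effectiveness (OEE), a standard manufacturing metric. Every figure in it is hypothetical.

```python
# Minimal sketch: on-demand calculation of a standard manufacturing KPI,
# Overall Equipment Effectiveness (OEE). All figures below are hypothetical.

def oee(planned_min, downtime_min, ideal_cycle_min, total_units, good_units):
    """OEE = Availability x Performance x Quality."""
    run_time = planned_min - downtime_min
    availability = run_time / planned_min                    # share of planned time running
    performance = ideal_cycle_min * total_units / run_time   # actual speed vs. ideal speed
    quality = good_units / total_units                       # first-pass yield
    return availability * performance * quality

# An 8-hour shift with 45 minutes of downtime, an ideal cycle time of
# 0.8 minutes per unit, 500 units produced, 480 of them good:
print(f"OEE: {oee(480, 45, 0.8, 500, 480):.1%}")   # prints "OEE: 80.0%"
```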

Although I worked in manufacturing for over 30 years, I wasn’t exposed to the term “business intelligence” until several years ago, when I discovered that I had been doing BI all along. Let me explain: over the years I held various titles, including Manufacturing Manager, Director of Manufacturing, and Director of Lean Six Sigma. My favorite, however, was “Data Dude,” an informal title bestowed on me in the late 1990s by a senior executive at the company I was working for at the time. Maybe not a title to put on a business card, but it was the best description of what I did. With that title, my name always came up in hallway conversations when executives had requests for what would now be called business intelligence. These requests were usually of two types (which is one way of categorizing BI outputs):

Periodic
  • CEO – “I need a monthly dashboard of our key financial metrics, including projections through the end of the year.”
  • CFO – “I need a quarterly report on customer profitability that factors in customer acquisition, account maintenance, and support costs.”
  • VP of Operations – “I need a monthly report that shows rework, returns, and warranty costs for each of our products.”

Ad Hoc
  • VP of R&D – “I need a report that estimates break even time for the three new products we just launched.”
  • VP of Sales – “We need to increase the size of our salesforce if we want to meet next year’s revenue goals. How can we figure out how many salespeople we need, and in which territories and market segments?”
  • Director of Manufacturing – “We just received a large order from a key customer. Given our current backlog, I need to know if we have the capacity to accept the order and deliver it on time.” (A back-of-the-envelope version of this capacity check appears in the sketch below.)
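
That last request is mostly arithmetic once the data is in one place. Here is a minimal Python sketch of the capacity check, with entirely hypothetical numbers for shifts, machines, utilization, and backlog:

```python
# Minimal capacity-check sketch: can the current backlog plus a new order
# fit in the machine hours available before the requested ship date?
# Every number here is hypothetical.

weeks_until_due = 6
hours_per_week  = 2 * 40 * 5        # 2 shifts x 40 hours x 5 machines
utilization     = 0.85              # realistic loading, not the theoretical max
backlog_hours   = 1_600
new_order_hours = 450

available = weeks_until_due * hours_per_week * utilization
shortfall = (backlog_hours + new_order_hours) - available
print("Accept the order" if shortfall <= 0 else f"Short by {shortfall:.0f} hours")
```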

BI Processes—Then and Now


To produce the requested outputs, my typical process looked something like this:

  1. Locate – Find out if the data exists in one or more of our enterprise software applications (not surprisingly, the data was usually scattered across multiple applications).
  2. Extract – Query each application and output the results into individual Excel spreadsheets.
  3. Clean – Look for and correct any “dirty data” (more on this in a future blog).
  4. QA – Double and triple check the corrections.
  5. Merge and Integrate – Once corrected, merge the spreadsheets, and integrate the data (often not as easy as it sounds).
  6. Analyze – If necessary, analyze the data with a statistical software package (Minitab, JMP, Statgraphics, etc.), assessing the statistical uncertainty of the results. (The more uncertainty, the greater the risk when using the data to address a key business issue.)
  7. Interpret – Determine what insights can be gained from the data.
  8. Publish – Create a report, dashboard, or analysis summary, attaching any insights, a user-friendly assessment of risk level (low, medium, high), and recommended actions.
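
Steps 2 through 5 were the tedious part. Today they collapse into a few lines of pandas; here is a minimal sketch, where the file names and the part_no column are hypothetical stand-ins for the per-application exports:

```python
# Minimal sketch of steps 2-5 above using pandas; file names and column
# names are hypothetical. Each export came from a different application.
import pandas as pd

orders  = pd.read_excel("erp_orders.xlsx")       # step 2: extracted exports
returns = pd.read_excel("quality_returns.xlsx")

# Step 3: clean obvious "dirty data" -- stray whitespace, duplicates,
# and inconsistent part-number formatting.
for df in (orders, returns):
    df["part_no"] = df["part_no"].str.strip().str.upper()
    df.drop_duplicates(inplace=True)

# Step 4: QA -- a spot check that should hold after cleaning.
assert orders["part_no"].notna().all(), "orders still contain blank part numbers"

# Step 5: merge and integrate on the shared key.
merged = orders.merge(returns, on="part_no", how="left")
```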

BI has come a long way since the cumbersome process I and others used in the 1990s, but the definition of business intelligence remains much the same: ultimately, we want to transform data into actionable insights. These days, the BI process is driven much more by software, gathering fragmented data from enterprise software applications and various external sources, including customers, suppliers, industry market sources, CRM, social media, and many more.

Compared to the process I used above, modern business intelligence relies on sophisticated BI software (frequently employing multiple software packages in sequence) to produce those actionable insights, making the process more efficient, but also more complex. The typical BI process used today might look like this:

  1. Connect to the data in enterprise applications (CRM, ERP, Accounting, etc.), or to external sources like social media, supplier data portals, census data, etc.
  2. Extract the data from those applications (it can be all of the data or just the data needed for a particular output).
  3. Prepare the data for analysis by cleaning it and transforming/converting it into a format that can be analyzed.
  4. Load the data into a secure cloud-based data warehouse or other system (sometimes the prepare and load steps are performed in the reverse order). There is considerable evidence suggesting that 90% of the work to produce actionable BI, and 90% of the time spent by data scientists, goes into successfully executing these first four steps.
  5. Query the data (this includes both traditional queries for structured data and machine learning algorithms for structured and unstructured data).
  6. Analyze the outcomes of the queries with statistical and data science tools, using one or more analytical modes to fit the task (more on this in a bit).
  7. Publish the analysis in an easy-to-understand visual format, including a user-friendly assessment of risk level and a list of recommended actions to take based on the data.
  8. Update the BI outputs by continuing to extract, transform, and load data from the internal and external sources.
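
Here is a minimal Python sketch of steps 1 through 4, assuming pandas, requests, and SQLAlchemy are available; the REST endpoint, column names, and warehouse connection string are hypothetical placeholders:

```python
# Minimal ETL sketch of steps 1-4 above. The REST endpoint, table name,
# and warehouse connection string are hypothetical placeholders.
import pandas as pd
import requests
from sqlalchemy import create_engine

# 1-2. Connect and extract (here, a CRM-style REST API returning JSON).
raw = requests.get("https://crm.example.com/api/opportunities").json()

# 3. Prepare: normalize the JSON into a flat table and coerce types.
df = pd.json_normalize(raw)
df["close_date"] = pd.to_datetime(df["close_date"], errors="coerce")

# 4. Load into the warehouse. (The reverse-order variant would load the
# raw data first and transform it inside the warehouse instead.)
engine = create_engine("postgresql://user:pass@warehouse.example.com/bi")
df.to_sql("opportunities", engine, if_exists="replace", index=False)
```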

Modes of Analysis


During the analysis step, we might perform one or more different types of analysis, which are worth breaking out a bit.

[Figure: Data Analytics Model] The figure shows five stages of analytics, with each stage providing greater value to an organization but becoming progressively more difficult to execute.

Descriptive Analytics (What happened?)


  • Visualize historical data in a graphical format (sales, on-time shipments, customer complaints, machine downtime, etc.), allowing you to look for patterns or trends in the data. The output could be a single graph or chart, or all your KPIs on a single dashboard.
  • Visualize the relationship between data sets or variables (Sales by Territory, Sales Variance to Forecast, Rework Dollars as a Percent of Sales, etc.).
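
As a minimal illustration, assuming pandas and matplotlib and a hypothetical shipments.csv export, a descriptive view of on-time shipment rate might look like this:

```python
# Minimal descriptive-analytics sketch: chart a historical metric to look
# for trends. The CSV file and its column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

shipments = pd.read_csv("shipments.csv", parse_dates=["ship_date"])
on_time = shipments["days_late"] <= 0

# Monthly on-time rate: group by calendar month and take the mean.
monthly = on_time.groupby(shipments["ship_date"].dt.to_period("M")).mean()

monthly.plot(kind="bar", title="On-Time Shipment Rate by Month")
plt.ylabel("On-time fraction")
plt.tight_layout()
plt.show()
```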

Diagnostic Analytics (Why did it happen?)


  • Visualize historical data and external events together to discover correlations that might suggest the cause or causes of a “negative” or “positive” event, such as a sudden drop or increase in sales, a machine breakdown, worse or better than expected yields, throughput, or margins, or an unexpected change to a KPI.
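
A minimal sketch of that correlation hunt, assuming pandas and a hypothetical daily_production.csv with the candidate drivers already joined in:

```python
# Minimal diagnostic-analytics sketch: check whether a quality metric
# correlates with candidate causes. Data and column names are hypothetical.
import pandas as pd

df = pd.read_csv("daily_production.csv")

# Pairwise correlation of yield against suspected drivers.
suspects = ["ambient_humidity", "raw_lot_age_days", "operator_overtime_hrs"]
print(df[suspects + ["first_pass_yield"]].corr()["first_pass_yield"])
# Correlation suggests where to look; it does not prove causation.
```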

Exploratory Analytics (What else is happening?)


  • Relies on Machine Learning and other data science tools to look for patterns and trends in large volumes of data.
  • Data can either be structured by type as in a traditional database, or unstructured as in social media posts.
  • Examples where this type of analytics could be used in manufacturing include comparing data across multiple plants that produce the same products to find best practices, analyzing materials and products from multiple suppliers to find the one with the ideal balance of cost and quality, or mining social media posts for customer satisfaction trends and insights. A clustering sketch of the supplier example follows.
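
Here is a minimal sketch of the supplier example using scikit-learn's k-means clustering; the supplier_lots.csv file and its columns are hypothetical:

```python
# Minimal exploratory-analytics sketch: cluster supplier lots on cost and
# quality to see which suppliers balance the two. Data is hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

lots = pd.read_csv("supplier_lots.csv")
X = StandardScaler().fit_transform(lots[["unit_cost", "defect_ppm"]])

# Three clusters is an arbitrary starting point; in practice you would
# tune this (e.g., with an elbow plot or silhouette scores).
lots["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(lots.groupby("cluster")[["unit_cost", "defect_ppm"]].mean())
```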

Predictive Analytics (What could happen?)


  • Performs a statistical analysis of past internal and external data to predict future events. With large data sets, data science and Machine Learning can determine the best predictive model for future events, and as new data is captured, continually improve that model.
  • Among other things, this type of analytics can improve sales and supply chain demand forecasting, predict machine breakdown, or forecast shortages of a purchased component or material.
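
As a minimal sketch, here is a simple trend model over hypothetical monthly demand figures; a production forecast would use richer features and a proper time-series method, but the shape of the task is the same:

```python
# Minimal predictive-analytics sketch: fit a trend on past monthly demand
# and project the next quarter. The demand figures are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

demand = np.array([510, 540, 533, 560, 575, 590, 588, 610, 625, 640, 652, 660])
months = np.arange(len(demand)).reshape(-1, 1)

model = LinearRegression().fit(months, demand)
future = np.arange(len(demand), len(demand) + 3).reshape(-1, 1)
print(model.predict(future).round())   # forecast for the next 3 months
```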

Prescriptive Analytics (How can we make it happen / prevent it from happening?)


  • This type of analytics uses internal or external data sources to predict the probable outcome of a decision before that decision is made, allowing companies to choose a direction, strategy, or tactic that minimizes risk and maximizes reward.
  • Prescriptive analytics is a data science discipline comprising a highly complex combination of business rules, algorithms, artificial intelligence, machine learning, and statistical modeling, and is still in its infancy.
  • In manufacturing, prescriptive analytics can be used to determine the best plant layout for optimum capacity, to determine which products to run in what sequence to optimize utilization of resources and on time delivery, or to determine the ideal level of inventory to balance cost and demand.
  • Market research firm Gartner reports that only 10% of organizations use some form of prescriptive analytics and predicts that usage will grow to 35% by 2020, primarily due to the growth of the Internet of Things (IoT).
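
A tiny taste of the "which products in what mix" idea, posed as a linear program with SciPy; every coefficient here is hypothetical:

```python
# Minimal prescriptive-analytics sketch: choose a product mix that
# maximizes profit within machine-hour limits, via linear programming.
# All coefficients are hypothetical.
from scipy.optimize import linprog

profit = [-45, -60]            # per-unit profit, negated: linprog minimizes
hours = [[2, 4],               # machining hours per unit of product A, B
         [3, 2]]               # assembly hours per unit of product A, B
limits = [160, 120]            # machining and assembly hours available

result = linprog(c=profit, A_ub=hours, b_ub=limits, bounds=[(0, None)] * 2)
print("Optimal mix (units of A, B):", result.x.round(1))   # -> [20. 30.]
```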

Using Clean Data


When data continually needs to be cleaned, the latency between data entry and final BI output can harm an organization’s ability to react quickly to changing conditions. Here are some sobering statistics: IBM estimated that in 2016 bad data cost US companies $3.1 trillion, and according to market research firm Gartner, poor data quality costs organizations an average of $13.3 million per year.

To reduce or eliminate dirty data at the source, organizations typically institute a data governance process that provides rules, policies, error-proofing, and oversight of data entry. (A future blog will discuss good data governance in detail.)
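
Error-proofing can be as simple as validating records before they are saved. A minimal sketch, with hypothetical field names and rules:

```python
# Minimal sketch of error-proofing at data entry: validate a record
# against simple governance rules before it reaches the database.
# Field names and rules are hypothetical.
import re

def validate_order(rec: dict) -> list[str]:
    errors = []
    if not re.fullmatch(r"[A-Z]{2}-\d{5}", rec.get("part_no", "")):
        errors.append("part_no must match XX-00000 format")
    if rec.get("qty", 0) <= 0:
        errors.append("qty must be positive")
    if rec.get("unit_price", 0) < 0:
        errors.append("unit_price cannot be negative")
    return errors

# A record with a malformed part number and zero quantity fails both rules.
print(validate_order({"part_no": "ab-123", "qty": 0, "unit_price": 9.50}))
```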

BI, Analytics and Data Science


There is some disagreement, and even controversy, about the definitions of business intelligence, analytics, and data science. Following the above process, I think a useful way to define the relationship between these disciplines is this:

“Business intelligence is an actionable objective that is achieved through data analytics using the tools of data science.”

I hope I’ve provided a concise overview and good definition of modern business intelligence. In the next blog, we’ll do the same for data science and the work performed by data scientists.

Jeremy Green, PhD
Lean Six Sigma Master Black Belt