Effectiveness of Systems Engineering Techniques on New Product Development: Results from Interview Research at Corning Incorporated

Abstract

In new product development (NPD) in the commercial (as opposed to military/aerospace) environment, many firms express confidence in the value of applying systems engineering (SE) techniques to the NPD process, even though there is little research to date that systematically evaluates the benefits of SE in the commercial setting. The goal of this paper is therefore to address this gap in the research by testing for SE impact across multiple projects, in this case within a single enterprise, namely Corning Incorporated. To achieve this goal, a joint team from the Systems Engineering Directorate at Corning Incorporated and the Systems Engineering Program at Cornell University conducted interview research to test for SE effectiveness in product development in a commercial setting. The team conducted 19 interviews with systems engineers and project managers within Corning to evaluate the extent to which they used a range of SE techniques, and the effectiveness of those techniques in improving project performance. The results from the interviews showed that for four selected areas of SE techniques (market, requirements, validation/verification, and trade studies), use of SE could be detected across projects covering a broad range of Corning’s markets. Furthermore, an association was found between SE input and project performance. Of the 19 projects, 3 had superior project performance, and 2 of these 3 had “above average” scores in terms of the extent of SE use. At the other end of the spectrum, 2 of the 19 projects were judged to have “struggling” performance, and in both cases project difficulties were traced back to shortcomings in the use of SE, reflected in low scores in one of the four SE areas. These findings support industry’s general intuition that early investment in the systems approach to NPD pays off in terms of better project outcomes. At the end of the paper, content analysis of quotes from interviews captures project managers’ perspectives on applying systems engineering, and the concluding discussion suggests ways the study of SE effectiveness might be extended to other enterprises.

Citation: Vanek, F., Jackson, P., Grzybowski, R. and Whiting, M. (2017) Effectiveness of Systems Engineering Techniques on New Product Development: Results from Interview Research at Corning Incorporated. Modern Economy, 8, 141-160. doi: 10.4236/me.2017.82009.

1. Introduction

Across many disciplines of engineering, there is widespread confidence in the benefit of using systems engineering (SE) as a core component of the new product development (NPD) process. However, this confidence can benefit from further research to expand the empirical evidence connecting SE practice with project success. Such evidence would help systems engineers to better make the case within the enterprise for allocating time and financial resources to SE activities. It would also help systems engineers to better understand the connection between SE and project/program performance, for example to learn what SE techniques are most effective in what situations.

Our goal, therefore, as members of the Systems Engineering Directorate at Corning Incorporated and the Systems Engineering Program at Cornell University, is to address this lack of empirical evidence for the effectiveness of SE by systematically studying multiple NPD projects to evaluate their reliance on SE and its relationship to project outcomes. Our research is carried out through a collaborative study of projects within Corning Incorporated. We reported earlier on the literature review stage in the INCOSE journal Systems Engineering [1], finding isolated evidence of SE effectiveness in a number of individual studies, and some systematic studies from the military/aerospace sector, but no systematic studies focused specifically on commercial NPD. This paper deals with the second part of the project, namely presenting the results from survey interviews of systems engineers and project managers within Corning in which our team looked for correlation between the application of SE and project performance. The full final report for this project is available to interested readers electronically (see Corning Systems Engineering Directorate [2]). Preliminary findings from the interview stage of the project were also reported earlier in INCOSE Insight [3].

This paper is organized as follows: in Part 2, we provide background for the interview research, focusing on the findings from the literature review and how they have informed the interview methodology. Part 3 describes the interview and project methodology in general, Part 4 presents the findings from applying the methodology to Corning, and Part 5 presents content analysis of interviewees’ responses. Lastly, Part 6 discusses insights about the methodology gained from the application to Corning, as well as ways to extend it in the future.

2. Project Background

The decision to study the use of SE in general and its effectiveness within Corning in particular came out of collaboration within the INCOSE Finger Lakes Chapter. The Cornell University Systems Engineering Program has existed since 1998 and the Systems Directorate at Corning since 2005. During this time, the Cornell program has been developing relationships with regional partners that have a strong interest in SE, such as Corning, and in 2006 a decision was taken to study the use of SE within Corning, both to help Corning with their internal efforts and also to allow Cornell to make a more general contribution to the body of knowledge surrounding SE effectiveness.

Our research was carried out in the context of an ongoing discussion in the literature about the importance of effective process, both in the design process generally and in SE practice in particular. It is self-evident that for a design solution in the commercial NPD space to be successful, the purpose it is trying to address (in other words, the problem it is trying to solve) must be understood correctly by the NPD project team. However, finding the correct approach to connecting purpose to solution is not a trivial task. One possible framework is provided by Corson [4], who divides the design process into three distinct elements: 1) Purpose, 2) Process, and 3) Product, as shown in Figure 1. According to this structure, “process” is the connection between “purpose” and “product” in both directions, i.e., creating a design from a need in one direction, and then verifying that the design fulfills the need in the other. While much systems engineering work is concerned with the product and whether the product meets the requirements of the purpose, deliberate consideration of the process is an essential component as well, since failure to deliver a product that satisfies the purpose can often be traced to problems in the structure or execution of the process that connects the two. The reader should be aware that difficulty in determining whether or not a project has “failed” confounds the capacity to determine causes of failure; for example, Pinto and Mantel [5] found that “the precise way in which failure was defined” contributed to project outcome.

Figure 1. Relationship between purpose, product, and process (after Corson, 2009 [4]).

In a similar vein, Sheard [6] divides SE applications into three categories: 1) Discovery, 2) Program, and 3) Approach. “Discovery” applications (e.g., novel applications of complex technologies, such as a new type of deep space mission) and “program” applications (e.g., mature complex technologies such as large commercial jet aircraft) focus on complex systems with an emphasis on the product, i.e., the systems engineer captures and designs the physical and logical relationships within a complex system. An “approach” application, by contrast, is often (though not exclusively) focused on evaluating and improving the process of developing new products or systems that solve a problem. Thus, our interest falls primarily in the area of “process” according to Corson’s categorization, and “approach” according to Sheard’s.

Finally, we note that the nature of the business in which a firm operates, and the firm’s strategy within that sector, will strongly influence how the firm applies SE. In this regard, Corning is a “keystone component” provider that specializes in materials science solutions which in turn enable the success of its customers’ products in the marketplace (e.g., consumer electronics and automotive products). Strategically, Corning undertakes a diverse range of projects with the goal of achieving breakthroughs and major revenue streams on some subset of those projects. Thus the application of SE at Corning will differ from that of, for example, an original equipment manufacturer (OEM) operating in a mature, well-defined product market.

Literature Review

In an earlier phase of our research [1] , we conducted a literature review of both the SE literature and of papers and reports from related fields that might include discussion of techniques common to SE, such as Total Quality Management or Six Sigma. We noted several instances where SE was found to have contributed to improved new product development (NPD) success in the commercial world. For example, Loureiro et al. [7] studied the development of two powertrains in the automotive industry and found that the powertrain development process that used more SE tools finished sooner and required fewer resources. Similarly, Staley and Warfield [8] describe the advantage gained by implementing a systems engineering framework for vehicle development at Ford in the 1990s. (The reader is referred to the earlier literature review for further details.)

These examples reaffirmed the value of SE in developing new products for the commercial marketplace. The literature review did not, however, uncover any studies that systematically evaluated SE effectiveness across multiple commercial-world projects, either within a single enterprise or across multiple enterprises. Nor did it uncover studies that evaluated longitudinally the benefit of introducing SE across multiple projects or enterprises, i.e., looking at the before-and-after effect of such a change. As part of the literature review, we also reviewed publications in the military/aerospace domain, which were found to be more advanced in terms of testing for and documenting SE effectiveness. This body of evidence included both case studies of individual instances [9] and surveys of multiple projects that showed correlation between SE input and project performance (e.g., Honour [10], National Defense Industrial Association [11]). The NDIA publication was particularly useful to us, not only as evidence of SE effectiveness but also because its methodology was adaptable to our application and provided a starting point for generating our own.

3. Methodology for Interviews and Post-Interview Analysis

The framework for conducting the research eventually developed by the Corning-Cornell team entailed interview research in which the Corning side of the team would first identify NPD projects for study. Thereafter, the Cornell side, as impartial researchers, would conduct on-site interviews of NPD team participants (hereafter referred to as “interviewees”) to assess the extent of use of SE and to evaluate project performance. These interviewees typically had the title of either “project manager” or “systems engineer.”

The specific process used to conduct an interview was divided into preparation, interview, and post-processing stages. In the preparation stage, we sent the interviewee a 2-page pre-read document to introduce the study and followed this with a brief introductory phone call. The interview stage was conducted at the company’s site and lasted approximately two hours. The post-processing stage entailed analyzing the responses from the interviewee as well as occasionally requesting follow-up data.

Our interview process observed the following guidelines:

• A non-disclosure agreement (NDA) reassured interviewees that they could openly share observations from their projects. Verbatim recording of interviews by audio or video means was disallowed. All results reported in this paper have been generalized so that they do not reveal sensitive information about new products.

• The evaluation of SE effectiveness was limited to practices observed in current or completed projects, without any attempt to initiate or expand the use of SE in a subset of the projects and then test for changes in performance. We considered introducing SE techniques to projects that were not currently using them and comparing this subset to a control group in which no changes were attempted, but ultimately judged this approach to be too complex and to require too great a lead time, given the time constraints on the study.

• We focused on observable practices, not knowledge of SE terminology. Since many interviewees had little familiarity with the language of systems engineering, we tested for the presence of specific techniques in a project (e.g., market analysis, tradeoff analysis) without regard for the particular name the respondent gave to the technique, or even whether the respondent was aware of applying a technique.

• We also took steps to avoid “telegraphing” to the interviewee the underlying SE technique that we were trying to detect, so that the interviewee could not skew the results one way or the other based on their opinion of that technique, their desire to promote or discourage it, or other subjective factors. We asked each question about the project independently, without indicating whether it had to do with market analysis, requirements engineering, and so on.

• When posing questions, we asked interviewees to display documentation to support their answers regarding their use of SE techniques, usually by opening and displaying electronic files related to the project. For example, an interviewee discussing the extent of their “competitor analysis” might present a PowerPoint slide comparing competitors’ products in the marketplace with their own proposed new product. On occasion, as part of the post-processing stage, we followed up with interviewees to arrange to be shown additional information that might help us better score the answer to a question. We viewed this documentation not to evaluate the technical content of the project but rather to evaluate the quality of the NPD process in terms of depth of use of SE techniques. We judged this approach to be more accurate than the anonymous questionnaires seen in other studies (e.g., NDIA [11]), where the respondent could make their own determination about the level of different types of SE input without the study authors being able to review any documentation.

• The Corning Systems Engineering group took care not to pass judgment in advance to the interviewers about how a given project was performing, since such information might prejudice the way the interviewer posed questions or conducted the interview.

• Interviews were used exclusively for information-gathering, with interviewers taking notes on the information drawn from the documents shown and capturing in condensed form the gist of any statements the interviewee made either supporting or discounting their use of SE techniques. Scoring of the project for either SE input or project output was not conducted during the interview, so that the interviewee did not feel pressured by knowledge of an ongoing interview score.

• After the interview stage of the project was complete, we conducted an affinity grouping analysis of all the collected statements. The grouped quotations were summarized under group headings to provide a qualitative, anecdotal perspective on the quantitative results of the research.

3.1. Development of Interview Question Content

Our process for developing the content of the interview entailed selecting areas of SE of greatest interest and then developing interview questions that would indicate their presence or absence in the NPD process. Specific areas of SE content were chosen from Honour and Valerdi’s [12] ontology of systems engineering. Their ontology (as presented in Table 5 of their paper) includes a structure of eight areas of systems engineering that are common to all or most of the major published SE standards, including ANSI-EIA/632, IEEE-1220, ISO-15288, CMMI, and MIL-STD-499C. As an indication of how consistent the standards are, we note that out of five standards, two of them incorporate all eight topical headings, one incorporates seven, and two incorporate six. These eight areas are similar to the list of 11 areas published in Chapter 4 of the INCOSE Systems Engineering Handbook [13], although the INCOSE list of areas includes some additional topics such as “transition”, “operation”, and “disposal.” The list of SE areas is also similar to that of the Systems Engineering Leading Indicators Guide [14] from the Lean Aerospace Initiative (LAI), which outlines a list of 13 areas that should be tracked for effective SE. In principle, our methodology could use either the INCOSE or LAI frameworks as a starting point for choosing a subset of SE categories as a focus for comparing projects without substantively changing the approach.

The eight SE categories are shown in Table 1, along with examples of content belonging in each category and an indication of whether or not the category was adopted for the study within Corning. As shown in Table 1, the four headings chosen are “market analysis”, “requirements analysis”, “technical analysis”, and “verification & validation.” The heading “market analysis” substitutes for the term “mission statement” used in military/aerospace standards such as MIL-STD-499C, reflecting our focus on commercial applications as opposed to large-scale systems procurement. The number and content of individual interview questions can be tailored to the specific enterprise or enterprises in which the methodology is being applied. This variability can be seen in Table 5 of Honour and Valerdi’s [12] paper, as the individual questions asked under a topical heading vary between SE standards.

Table 1. Description of SE categories available for inclusion in interview content.

Source: adapted from Honour & Valerdi [12], Table 5. Note that where the original table uses “Mission definition”, we use “Market analysis” as being more appropriate for commercial world applications.

In our case, we developed four questions each for “market analysis” and “technical analysis” and three questions each for “requirements analysis” and “verification & validation”, for a total of 14 questions. As an illustration, the four questions for “market analysis” were the following:

1) “What evidence can you present of market analysis, including total market size, market segmentation (by geography or customer type), target share, and/or market testing?”

2) “What evidence can you present of customer analysis, such as customer surveys?”

3) “What evidence can you present of competitor analysis, such as identification of the price, technology, or growth leader, or the assessment of the potential role of major competitors?”

4) “What is the most recent version of the value proposition for the product that you have presented, for example at the last progress or stage gate review?”

In response, an interviewee might present, for each of the points, 1) a figure or table of market segmentation, 2) results from a survey of prospective customers for the product, 3) a comparison of offering (COO) showing how their proposed product compares with existing products in the marketplace in terms of several key characteristics (e.g., unit price, durability, etc.), and 4) a figure or diagram showing the value proposition to the prospective customer and to Corning, respectively. As long as the evidence was of sufficiently high quality, the interviewee might earn the maximum of 1 point for each question, or 4 points total for the market analysis area. If, in the judgment of the interviewer, the evidence was marginal or non-existent, they might earn half points or zero points on some questions, leading to a lower score. A brief description of all 14 questions can be found in the Appendix at the end of this paper.

3.2. Assignment of SE Input Score and Evaluation of Project Performance

Immediately following the interview, the interviewer scored each of the 14 questions. Scores for each of the four SE areas were then tallied, and a percent score out of a maximum of 25% was calculated for each area (25% if the project earned all available points in that area, 12.5% if it earned half of the available points, etc.), so that each area would contribute equally to the project’s overall score. Summing the percent scores earned in each SE area yielded an overall percent score for the project, with higher-scoring projects having greater SE input.
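
To make the tallying concrete, the following minimal sketch (in Python) shows one way the normalization could be implemented. It is an illustration only, not the instrument used in the study; the question groupings follow the 14-question structure described above, and the example per-question scores are hypothetical.

    # Minimal sketch of the SE input scoring described in Section 3.2.
    # Each question earns 1 (full), 0.5 (half), or 0 points.
    AREAS = {
        "market analysis": ["Q1", "Q2", "Q3", "Q4"],
        "requirements engineering": ["Q5", "Q6", "Q7"],
        "verification & validation": ["Q8", "Q9", "Q10"],
        "technical analysis": ["Q11", "Q12", "Q13", "Q14"],
    }

    def se_input_score(question_scores):
        """Return per-area percent scores (max 25% each) and the overall
        percent score; normalizing each area to 25% makes all four areas
        contribute equally regardless of question count."""
        area_pcts = {}
        for area, questions in AREAS.items():
            earned = sum(question_scores[q] for q in questions)
            area_pcts[area] = 25.0 * earned / len(questions)
        return area_pcts, sum(area_pcts.values())

    # Hypothetical example: full marks except half credit on Q5 and Q8.
    scores = {q: 1.0 for qs in AREAS.values() for q in qs}
    scores["Q5"] = scores["Q8"] = 0.5
    area_pcts, overall = se_input_score(scores)
    print({a: round(p, 1) for a, p in area_pcts.items()}, round(overall, 1))
    # -> the two affected areas each score 20.8%; overall score 91.7%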

The evaluation of project performance was divided into two parts: 1) verification of internal measures, such as adherence to schedule, annual budget, or allocation of human resources, and 2) evaluation of the project’s ability to meet the expectations of both Corning and its various clients. In the first part, adherence to time or resource allocation does not guarantee a satisfactory or superior outcome, but failure to adhere may signal problems in a project that will be further revealed through examination of the history of the project and the eventual product delivered, if any. The second measure for project evaluation was the satisfaction of Corning’s internal goals and of customer goals external to Corning. According to our standard of evaluation, products were classified as meeting expectations (“satisfactory”), exceeding expectations (“superior”), or falling short in some way (“shortfall”). An NPD project that develops through all stages to a mature, financially successful product is of course one that meets both goals, and therefore earns a performance rating of either “satisfactory” or “superior.” A project that is terminated before reaching market maturity, however, is not automatically considered to have fallen short of expectations. That is, if the decision to terminate is made in a timely fashion, if it leads to intellectual property (IP) that can be used later, or if it spawns new projects, the project can be classified as meeting expectations under this system. This evaluation is consistent with the Product Development Management Association approach to allocating resources to projects: fund diverse projects, advance those that are promising, and promptly remove resources from those that are not, to make way for new opportunities [15]. In a cutting-edge technology firm like Corning, with its stated strategy of seeking breakthrough keystone technologies, it is necessary to try many diverse technology concepts. Success is measured by the ability to discern accurately, and without great delay, which ideas are the most promising.

At one end of the spectrum of project performance, exceeding expectations implies a project that truly stands out in terms of its market success. At the other extreme, project shortfall can come about in several ways: 1) failing to reach maturity in the marketplace despite a prolonged period spent in early-stage development, 2) failing to advance due to problems in collaboration within the firm, or 3) reaching the marketplace but failing to realize full commercial potential due to critical problems that went unaddressed during the development phase. In instances where the NPD process was still ongoing at the time of the interview and we could not make a definitive assessment of project performance, we contacted interviewees up to 18 months later to gather additional information about the project outcome and assign a rating.

4. Results from Interview Process within Corning

Between April 2008 and March 2009, we completed 19 interviews covering 19 different NPD projects within Corning, involving 22 different PMs and SEs. (For some projects we interviewed two project team members, while in one instance we interviewed the same PM for two different projects.) In terms of demographics, we interviewed one person with less than 10 years of experience, 11 with 10 to 19 years, nine with 20 to 29 years, and four with 30 or more years. The 19 projects were chosen by the Corning authors from among 80 candidate projects with which the Corning SE Directorate interacted. The primary criterion was a desire to study a wide range of projects that would provide data across many businesses, products, project sizes, and technologies. Secondary filtering was based on more pragmatic considerations, such as the availability of project leaders for lengthy interviews within the time window in which the study was conducted (the NDA stipulated the duration of the interviewing window). Ideally a larger fraction of the 80 projects would have been studied, but this was not possible due to time limitations. Also, given the small number of projects studied, it was not possible to choose a statistically representative sample of the overall body of Corning projects; however, the 19 projects are thought to be representative in the sense that they cover all five major market areas in which Corning is active.

A review of the final answers from all the interviews showed a detectable difference in SE input, as the 19 projects ranged in percent score from 41% to 92% in terms of their use of SE (Figure 2). The breakout of score by SE category is shown in the figure as well; each project could earn a maximum of 25% in each category, as illustrated by Project 1, which earned 25% in all categories except “requirements engineering”, where it earned 17%. Even though many interviewees did not use SE terminology to describe their approach to managing the project, they were found to be using SE to varying degrees. For example, some interviewees did not use the term “design for testability”, yet their documented actions reflected proactive thinking about how to plan and schedule testing to evaluate whether a requirement had been met, from the time a requirement was introduced onward. Other interviewees, on the other hand, had not clearly engaged in proactive thinking about design for testability, at least based on the project documentation they presented.

Figure 2. Comparison of percent of total possible points earned by numbered project, including contribution from SE category. (Notes for figure: “Trade” = technical analysis, “Verif” = verification & validation, “Requ” = requirements engineering, “Market” = market analysis. Stack order is the following: from top to bottom, Trade/Verif/Requ/Market).

The 19 projects were divided into higher, medium, and lower SE input groups, with higher and lower input projects having percentage scores more than one standard deviation above or below the mean score across all projects. The mean was 59% and the standard deviation 13%, so the breaks fell at scores of 72% and 46%. Thus two projects were found to have “low” SE input, three “high” input, and the remaining 14 “medium” input. Among the four SE categories, “market analysis” earned the highest number of points on average from the questions asked, followed by technical analysis, requirements engineering, and verification/validation.
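
As a minimal sketch of this banding (again in Python, with hypothetical scores standing in for the per-project results, which are shown only graphically in Figure 2):

    # Sketch of the lower/medium/higher SE input banding from Section 4.
    # Band breaks are one standard deviation either side of the mean;
    # whether the study used the sample or population SD is not stated,
    # so the population SD is assumed here.
    import statistics

    def band_projects(overall_scores):
        """Map project number -> 'lower', 'medium', or 'higher' SE input."""
        mean = statistics.mean(overall_scores)
        sd = statistics.pstdev(overall_scores)
        lo, hi = mean - sd, mean + sd

        def band(score):
            if score < lo:
                return "lower"
            if score > hi:
                return "higher"
            return "medium"

        return {proj: band(s) for proj, s in enumerate(overall_scores, start=1)}

    # Hypothetical usage with a list of 19 overall percent scores:
    # bands = band_projects([92, 82, 77, 61, ..., 41])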

4.1. Project Performance and Association with SE Input

Whenever possible, project performance was evaluated immediately after the completion of the interview, based on documentation of budget and schedule adherence and on evidence of project success or failure in the marketplace presented during the interview. The majority of projects were evaluated this way, but in some instances the NPD process was early enough in its life cycle that it was necessary to wait for the project to unfold before evaluating its performance. All remaining projects were evaluated in early 2011 through follow-up contact with Corning. In post-interview debriefings between Corning and Cornell, satisfactory agreement was found on the assessment of performance; for example, in cases where Corning SE was aware of problems having arisen in a project, information about those problems came out in the interview.

Of the 19 projects studied, 3 were found to have superior performance, 2 had performance shortfalls, and the remaining 14 had satisfactory performance. The findings from this stage of the research, although nuanced and limited by the number of projects studied, were in general supportive of the value of SE: superior projects generally had high SE input, and projects that fell short had shortcomings in one or more of the SE input areas, although not necessarily markedly lower overall SE input scores compared to satisfactory or superior projects. The association between SE input and project performance can be illustrated using the mosaic diagram shown in Figure 3, where the fraction of each bar in a given color is proportional to the number of projects within that category of SE input having the specified performance. For example, the bar in the middle of the figure represents the 14 projects with a medium level of SE input. Within this bar, two projects fell short of performance expectations, one exceeded expectations, and the remaining 11 were satisfactory.

Based on the color-coding, the share of superior projects increases and the share of shortfall projects decreases as SE input increases from left to right across the figure, with no superior projects in the lower input category and no shortfall projects in the higher input category. In an ideal situation, there might be perfect correlation between SE input and performance: all shortfall projects in the lower input column, all satisfactory projects in the medium input column, and all superior projects in the higher input column. Since product development is a highly complex process, however, it is not surprising that the correlation between SE input and performance is more mixed, with more than one color appearing in each column.

Figure 3. Project performance as a function of overall SE input, for lower, medium, and higher SE input projects. Note: the number in parentheses shows how many projects fell into each category bar, e.g., 2 projects in the “lower SE input” category, etc. See text.

In terms of the use of budget or schedule adherence data to evaluate performance, the data from the projects sometimes revealed when a problem may have arisen, but did not in general provide an exact indication of project performance. For the first measure, comparing predicted to actual use of time and financial resources provides a possible way to detect whether a project has encountered difficulties that then resulted in cost overruns or delays. However, increases in expenditures or the lengthening of a schedule may instead reflect growing interest in or success of a new product; without knowing the details of the individual project, one cannot make definitive claims. Conversely, projects may adhere to schedule and budget and still underperform or outperform expectations. Honour [10] comments that many projects “sit on a line” in terms of their budget adherence: the ratio of expected to actual expenditure of funds and schedule time is close to unity, and one cannot distinguish between projects in terms of delivered performance on this basis alone.

Cognizant of this limitation, we used other evidence of project performance (customer interest growing faster than expected, accelerated development and launch into the marketplace, evidence of high profitability or return on investment, design awards, and the like) in addition to budget/schedule adherence (in instances where some distinction could be drawn) to evaluate performance. Ideally, one would like to evaluate performance quantitatively on some objective measure such as budget and return on investment, but this proved not to be possible in the case of Corning, so our performance measure is a qualitative evaluation of mixed objective and subjective factors.

4.2. Comparison of High and Low Performing Projects

The three high-performing projects can be divided into two that also had high scores for SE input (Project 1 with 92% and Project 3 with 77%) and a third that was in the medium SE input range (Project 6 with 58%). Projects 1 and 3 scored at either an average or a very high level across all four SE categories; in no instance did they score poorly in any SE area. In addition to making an outstanding contribution to Corning’s corporate objectives, the process observed within these projects was highly productive: interviewees described characteristics such as accelerated advancement through the stage gate process or the winning of internal company awards for project excellence. Not all projects with high SE input in Figure 3 also had high performance: Project 2, with an SE input score of 82%, was judged to have satisfactory performance.

The overall SE input score of the third high-performing project (Project 6) was equal to the 58% average across all projects. It achieved high scores in market analysis and average scores in the other three areas. The project developed a product within an established product line in a market where Corning has had a presence for many years, and returned outstanding results compared to a number of similar products launched in the past. This situation suggests that SE input leads to project excellence some of the time, but it is not a perfect predictor of outstanding performance. In NPD, project success is never a mechanical exercise in following a well-trodden path toward a guaranteed outcome, and it is to be expected that at times factors other than SE input lead to success.

Turning to the two projects that fell short, the common feature was not an across-the-board failure to score adequately in all four SE areas. Rather, what emerges are critical failures in one or more areas, even though the project may have achieved adequate and even above-average SE input scores in the others. In the first of the two projects (Project 7), the critical problem lay in translating the value proposition and requirements for a successful product into verification and validation steps that could ascertain that the product met its goals. In particular, evidence of testing was presented during the interview, but none of it conformed to a systems engineering approach to testing, and the project earned 0 out of 3 possible points for verification and validation. Schedule and budget adherence data viewed during the interview, along with the interviewee’s own description of the development trajectory, corroborated the ensuing difficulties: both budget and schedule deviated from expectations as the project struggled to rectify problems with the product. Eventually the product reached full maturity in the marketplace, and, although customers’ technical expectations were eventually met, the product fell short of Corning’s revenue expectations. Thus the product’s path to market fits the dangers described by Meyer and Lehnerd [16] regarding failure to develop rigorously and in a timely fashion: the product arrives in the market both late and at too high a cost, and cannot overcome the financial handicap that has been created. It is reasonable to see causation here: a more rigorous approach to testing might have resulted in a more successful project.

The second shortfall project (Project 8) fell slightly below the average, with a 57% overall SE input score, but had a particularly low market analysis score: it earned only 2 out of 4 possible points, compared to an average of 3 out of 4, with a number of projects, including “superior” Projects 1 and 3, earning all 4 available points. Project 8 was deficient in documenting a clear and detailed analysis of its proposed market, the segmentation of that market, and the niche within it where Corning’s product might succeed. Instead, the market analysis produced a broad but simplistic statement about a potentially very large market within which a clear value proposition ultimately failed to emerge; as such, it garnered relatively few points in the SE input scoring framework. Lacking a focused market vision, the project remained for many years in an early development stage, ultimately resulting in a shortfall of expectations. Analysis outside the interview revealed that this occurred in a product market where competitors were eventually able to market successful products in clearly defined niches. Thus, not only did the lack of solid market analysis hamper the project, but it also prevented application of the principle of cutting off unpromising projects in a timely fashion. Interestingly, this project fared better (though not outstandingly) in its requirements score, and had strong V/V and technical analysis scores. However, with the market misunderstood, these other areas were moot: once the project had taken the wrong direction at the outset, other SE techniques could not reverse the outcome.

5. Content Analysis of Interviewee Quotes

Along with scoring the answers to individual questions posed in the interviews, we retained a number of quotes from interviewees that shed further light on the answers given, beyond what can be captured in scoring a response to a question. For this part of the research, we did not have a word-for-word transcript of the interviews available. However, we believe that the quotes as given are accurate enough to represent the intended meaning of the interviewee.

To analyze the overall meaning of the 16 collected quotes, we conducted an affinity grouping exercise. Three categories of quotes emerged from this exercise:

1) Quotes that supported the benefits of using SE (3 quotes).

2) Quotes that illustrate the challenges with using SE generally (5 quotes).

3) Quotes that illustrate the challenges with using SE in the specific context of Corning research and development (8 quotes).

In the remainder of this section, a sampling of highlight quotes is presented and discussed.

5.1. Quotes That Supported the Benefits of Using SE

In general, these quotes gave further details about how using SE made a difference in project performance.

“I came onto the project in midstream as a newly added systems engineer. When I started, I found the approach to testing to be unfocused and responded by introducing ‘design for testability’: A general test description would appear as soon as requirements were set out. I considered bringing focus to the testing process to be the job of the systems engineer. Technical people responsible for testing responded positively to the change: they could see its appeal right away.”

This quote speaks to SE at work in making projects perform better. As illustrated in this case, SE is not always present in a project from its inception. Here, the interviewee joins the project with the express purpose of bringing SE to it, sees that one area of weakness is verification and validation, and introduces the concept of “design for testability” with immediate positive results.

“The motivation for the project was based on an early value analysis, which showed potential value to the customer and value to Corning. But the early value analysis was a projection only; when, in the final, more realistic value analysis, the actual value of the product was negative, management decided to shelve the project.”

As with the previous quote, this one shows how introducing SE can bring rigor to market analysis and help project management and its stakeholders make the sometimes tough decision to terminate a project, consistent with the PDMA philosophy mentioned above. Beyond what is captured in the quote itself, the interviewee went on to describe how some within the project resisted this decision in the hope that the project might continue and eventually find a market. This situation only serves to illustrate how, without a strong connection to market analysis, members of a project team can become invested in the unjustified continuation of a project because of the work they have already put into it.

5.2. Quotes That Illustrate the Challenges with Using SE Generally

Some of the quotes illustrate the challenges with implementing SE techniques in general. These challenges are thought to be applicable across all types of organizations that might implement SE.

Q: “Given that the market analysis outcome for the product was that the market was not large enough, wouldn’t it have been better to wait for market analysis results before continuing development?” A: “You have to pursue market analysis and development simultaneously. If you wait for the market analysis answer to come back before starting development, you’re too late.”

This quote shows the challenge the product development team faces in deciding how to allocate resources. A team needs a solid market analysis in order to know whether or not it is viable to go forward with development. On the other hand, timing matters as well: if the market analysis is positive, the team needs to be ready to move forward as rapidly as possible in order to launch the product in a timely fashion, given the competitive nature of the marketplace. Note that not all projects studied involved products under high pressure to launch quickly; some projects enjoyed more leeway to take as much time as necessary to find a solution that works well.

“We created a plan for systems level acceptance testing (SLAT) but did not follow through… SLAT won’t happen because product engineering resource has been sucked into other activities… resources are always being taken away for customer purposes.”

This quote illustrates the quintessential “catch-22” for systems engineering: if resources were made available to carry out the testing, the results would likely rectify the problems; but because SE is not a priority for the stakeholders, the resources are diverted elsewhere, and the problems in the project continue to fester.

5.3. Quotes That Illustrate the Challenges with Using SE in the Specific Case of Corning

Along with quotes that illustrated challenges for SE that transcend the specific organization, some of the challenges encountered were specific to the characteristics of Corning. These characteristics include the focus on developing keystone components for a diverse range of applications (from optical electronics to life sciences), the focus on materials science as a key competency, and the strategic emphasis on developing breakthrough technologies as opposed to incremental improvements.

“The project was budgeted to experiments, not to deliverables. It’s all learning, which is different from meeting statements of deliverables.”

This quote shows the difficulty of using adherence to budget as a measure of project performance in some situations. Sometimes spending more on a project than was budgeted may be the right thing to do, justified by the need to learn thoroughly about some aspect of the project as a necessary foundation for eventual product success. When projects overrun their budgets in this situation, it may be difficult to tease out how much of the overrun was due to the need for learning, and how much was due to mis-execution that might have cost less if it had been done differently.

The following two quotes convey a similar meaning and are presented together:

“The moment you try to lock research scientists into a rigid timetable of test schedules and deadlines, they start running for the exit.”

“How do you plan and schedule testing when you don’t even know enough about the topic to know what it is you are going to test?”

These quotes show the challenges associated with coming up with test plans and schedules in a research-oriented environment. Looking back across all the interviews, it is clear that, in some situations, there simply is not enough known to apply design for testability at the outset. Furthermore, attempts to force a plan and schedule onto research scientists may indeed stifle innovation. At the same time, other projects among the 19 studied earned high marks in the verification and validation area. Perhaps a reasonable compromise is to always consider design for testability, and implement it where possible.

6. Discussion

In the preceding three sections, we have presented an interview methodology and shown how its application within Corning yielded evidence for the effect of SE on product development. Several observations arose from this work, as follows:

1) The progression from market analysis to requirements to V/V is critical: Projects 7 and 8 had weaker-than-average scores in one or more of these areas, and they also experienced the critical failures that led to project shortfall. Technical analysis, while helpful, was not found to be part of this list of critical steps. During the interviews for many projects, we encountered documentation of carefully applied and well-documented trade studies (Kepner-Tregoe, or K-T, analysis being commonly used). Such studies were used, for example, to decide which outside enterprise to collaborate with, or which materials solution to adopt among several competing choices. These decisions, however, did not appear to contribute fundamentally to the success or failure of a project in any interview that we conducted. In summary, the message was that technical analysis is valued and used within the organization, but the areas of market, requirements, and V/V have a closer relationship with project success.

2) The approach can be adapted to meet the needs of other firms: We have presented a framework for how to create the interview research process, and not an immutable set of questions that must be asked. In the case of a firm like Corning Incorporated, which applies materials science expertise to providing keystone component solutions for its customers, the value comes from systematically evaluating exploratory materials science concepts to identify the most promising ones. Once these concepts are identified, SE ensures that they stay focused on the value proposition so that they succeed in the market. Another enterprise might pursue this research internally and have latitude to choose a different set of SE categories from the original list of eight, to suit their needs. They might also choose to study all eight, although with limited time and resources, it may be difficult to study each in sufficient detail to yield meaningful results. Having chosen categories, the enterprise is also free to create their own questions to be asked, rather than using those that we provide in this paper.

3) Repeat application in other firms will yield both individual and collective benefits: first, it will help other firms justify to themselves the value of using SE techniques in product development. It will also help them to tailor their own internal use of SE to their own needs, as the research will uncover which techniques have the most effect on project performance. Lastly, a body of research built around these interviews will help make the case for the benefit of systems engineering across industry in various sectors.

7. Conclusions

This paper reports on an interview research methodology for measuring the effectiveness of SE techniques in improving project performance in new product development, and on its trial application at Corning Incorporated. The fundamental steps in the process are 1) choosing from a menu of options the areas of SE techniques to evaluate, 2) creating from the list of techniques a set of SE input questions to discuss in the interview setting, 3) carrying out “interviews with documentation” to evaluate SE input and project performance, and 4) comparing the projects studied in the post-interview stage to evaluate the relationship between SE input and project performance. We conclude, based on the interviews included in this study, that 1) the methodology is effective in gathering accurate and candid information about projects, and can be adapted to other firms and other product sectors or business strategies, and 2) there was a relationship between increased SE content and improved project performance in the case of the 19 Corning projects reviewed, advancing the claim that enhancement of SE capability benefits NPD. The methodology can be adapted by other commercial firms to study their own use of SE techniques, building a stronger quantitative, empirical case for SE through repeated application across multiple firms. To summarize, the main contribution of our research is that we were able to measure systematically, across multiple projects, the positive impact of SE input on performance, which to the best of our knowledge has not been done previously in the commercial NPD space.

Acknowledgements

The authors of the report wish to thank Professors Al George, Linda Nozick, and Frank Wayno of Cornell University for their input into the project, and the Corning Foundation and the Cornell University Systems Engineering Program for financial support for the project. While this support is gratefully acknowledged, the findings do not represent official opinion of either Corning Incorporated or Cornell University, and responsibility for any and all errors rests with the authors.

Appendix

Scores for each of the projects with regard to each of the questions:

Table A1. Scores for each of the projects.

Explanation: “++” means the project earned full score (one point) for the question; “+” means it earned half score (half a point); “0” means no points were earned. Code to questions: in each of the 14 SE input questions, documentation was requested on the following topics. Market analysis: Q1) market analysis, Q2) customer analysis, Q3) competitor analysis, Q4) value proposition for the product. Requirements engineering: Q5) technical performance measures (TPMs) in use, Q6) TPMs tied to value proposition, Q7) TPMs tied to schedule. Verification & validation: Q8) testing tied to requirements, Q9) testing tied to test plan, Q10) test schedule adherence. Technical analysis: Q11) use of tradeoff analysis, Q12) tradeoff analysis tied to requirements, Q13) background research for tradeoff analysis, Q14) stakeholder interaction regarding tradeoff analysis.
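
As a convenience for readers tallying Table A1 by hand, the following minimal Python sketch shows the symbol-to-points conversion implied by the key above; the example row is hypothetical, not drawn from the table.

    # Convert Table A1 symbols into numeric points per the key above:
    # "++" = 1 point, "+" = 0.5 point, "0" = 0 points.
    SYMBOL_POINTS = {"++": 1.0, "+": 0.5, "0": 0.0}

    def row_points(symbols):
        """Total points for one project's row of 14 question symbols."""
        return sum(SYMBOL_POINTS[s] for s in symbols)

    # Hypothetical example row:
    print(row_points(["++"] * 10 + ["+"] * 3 + ["0"]))  # -> 11.5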

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Vanek, F., Jackson, P., and Grzybowski, R. (2008) Systems Engineering Metrics and Applications in Product Development: A Critical Literature Review and Agenda for Further Research. Systems Engineering, 11, 107-124.
https://doi.org/10.1002/sys.20089
[2] Corning Systems Engineering Directorate (2009) Corning-Cornell Project to Evaluate Effectiveness of Systems Engineering Techniques: Final Report from a Literature Review and Interview Research. Corning Incorporated, Corning.
http://www.lightlink.com/francis/CorningReport2009.pdf
[3] Vanek, F., Grzybowski, R. and Jackson, P. (2009) Quantifying the Benefits of Systems Engineering in a Commercial Product Setting. Insight, 12, 37-38.
https://doi.org/10.1002/inst.200912137
[4] Corson, B. (2009) Designing Day One: Pre-Expert Strategies for Improving Collaborative Design Processes. Presentation to Cornell University 2009 Systems Engineering Day, Ithaca.
[5] Pinto, J. and Mantel, S. (1990) The Cause of Project Failure. IEEE Transactions on Engineering Management, 37, 269-276.
https://doi.org/10.1109/17.62322
[6] Sheard, S. (2000) Three Types of Systems Engineering Implementations. Proceedings of International Council on Systems Engineering Conference, Coventry, March 2000, 1-10.
[7] Loureiro, G., Leaney, P. and Hodgson, M. (2004) A Systems Engineering Framework for Integrated Automotive Development. Systems Engineering, 7, 153-166.
https://doi.org/10.1002/sys.20001
[8] Staley, J. and Warfield, J. (2007) Enterprise Integration of Product Development Data: Systems Science in Action. Enterprise Information Systems, 1, 269-285.
https://doi.org/10.1080/17517570701507685
[9] Frantz, W.F. (1995) The Impact of Systems Engineering on Quality and Schedule: Empirical Evidence. Proceedings of 1995 International Council on Systems Engineering Conference, Wroclaw, 1-7.
[10] Honour, E. (2004) Understanding the Value of Systems Engineering. Proceedings of the 14th Annual International Council on Systems Engineering Symposium, Toulouse, 1-16.
[11] National Defense Industrial Association (2007) A Survey of Systems Engineering Effectiveness—Initial Results. NDIA, Pittsburgh.
[12] Honour, E. and Valerdi, R. (2006) Advancing an Ontology for Systems Engineering to Allow Consistent Measurement. Proceedings of Conference on Systems Engineering Research, Los Angeles, 6-9 April 2006, 1-12.
[13] International Council on Systems Engineering (2006) Systems Engineering Handbook: A Guide for Systems Life Cycle Processes and Activities. INCOSE, Seattle.
[14] Roedler, G. and Rhodes, D. (2007) Systems Engineering Leading Indicators Guide. INCOSE, Seattle.
[15] Product Development Management Association (2002) The PDMA Toolbook for New Product Development. John Wiley & Sons, New York.
[16] Meyer, M. and Lehnerd, A. (1997) The Power of Product Platforms: Building Value and Cost Leadership. Free Press, New York.
