New Product Data Quality
Eindhoven, October 2010
By
Maarten Kluitman
Bachelor of Mechanical Engineering – Fontys Hogescholen 2005
Student identity number 0620779
in partial fulfillment of the requirements for the degree of
Master of Science
in Innovation Management
Supervisors:
Prof.dr. G.M. (Geert) Duijsters, TU/e, ITEM
Prof.dr. R.J. (Ronald) Mahieu, TU/e, ITEM
Ir. J. (Jac) Goorden, Bicore
TUE. Department Technology Management.
Series Master Theses Innovation Management.
Subject headings: new product development, innovation management, portfolio
management, new product data quality, research and development.
Summary
In the literature innovation is identified as a very important strategic challenge. Open
innovation has increased the need to master the management of innovation, since
innovation complexity is increased by the introduction of a greater interdependency among
firms. Moreover, open innovation brings greater interface demands and a greater
organizational ability to absorb external information and assess the impressions from the
outside. It is essential for companies to use the right tools to help manage the complexities
which are introduced by these additional demands. Innovation management tools play an
important role in facilitating an open innovation strategy. This study is focused on one
particular innovation management tool, R&D portfolio management.
R&D portfolio management is concerned with the analysis of portfolios containing the set of
R&D projects, technology, and new product efforts currently funded and underway. The
overall goal of this business process is to link the corporate strategy and the management of
the aforementioned innovation efforts. Furthermore, R&D portfolio management is a
dynamic decision process, whereby projects are evaluated, selected, and prioritized in order
to reach the following potentially conflicting goals: strategic alignment, maximization of
value, and portfolio balance. This way it provides a coherent basis on which to judge which
projects should be undertaken. In order to make such project selection and/or ranking
decisions businesses use a combination of different portfolio methods. These include,
among others, financial methods, strategic approaches, bubble diagrams, scoring models,
and checklists. All these portfolio methods require information which is commonly part of
the business case of an R&D project. More specifically, the results from activities such as
market, technical and financial analysis serve as input for portfolio analyses. In this study the
term ‘new product data’ is used to denote the data collected for portfolio analyses. However,
the fact that this information is generally available does not make R&D portfolio
management a routine decision-making area. Among other reasons, this is because it deals
with future events and opportunities, so a great deal of the required information is highly
uncertain. Moreover, all projects compete against each other for resources while there are big
differences in the quality of the data. Furthermore, the decision environment is very dynamic.
These characteristics are closely linked to the notion of data quality. New product data
quality is an important factor that hinders the implementation of R&D portfolio
management, and slows down perceived and actual benefit gains from this decision making
process.
In this study data quality is defined as data that are fit for use by data consumers. Data
quality consists of twenty data quality dimensions which are sorted in the following four
categories: (1) intrinsic data quality, (2) contextual data quality, (3) representational data
quality, and (4) accessibility data quality. This study only focuses on intrinsic data quality,
hence the other three categories are beyond the scope of this research project.
Intrinsic data quality denotes that data has quality in its own right. This means that this
category entails intrinsic data properties, which are properties that the data has of itself,
independently of other things, including its context. Intrinsic data quality consists of the
following data quality dimensions: (1) believability, (2) accuracy, (3) objectivity, and (4)
reputation. For each of these dimensions several factors are identified which can be used to
assess or improve new product data quality.
Well-designed forecasts can help managers understand future risks so they can make
better business plans that inform ongoing R&D portfolio decision making. Thus, competence
in forecasting can improve new product data quality and thereby overcome the
implementation problem of R&D portfolio management. Even more importantly, better R&D
portfolio management decisions are made when new product data quality is high. In the long
term, new product data quality is key to the company’s return on innovation investments.
This research project identified the following approaches that businesses can take to
improve new product data quality:

• Formalize the new product forecasting process. By developing and implementing a
systematic, explicit new product forecasting process, R&D portfolio information and
decision quality can be enhanced.
• Decompose the new product estimation problem. The new product estimation
problem should be decomposed into sub-problems that can be more easily or
confidently estimated; these sub-estimates are then aggregated based on sound
rules.
• Require justification of forecasts. New product forecasting should be viewed as a
process of assumption management, which involves the generation, translation, and
tracking of assumptions and their metrics.
• Perform a sanity check. A relatively quick way to improve new product data quality is
by performing a sanity check, a basic test to quickly evaluate whether data can
possibly be true.
• Measure and track forecast performance. Measuring and analyzing forecast
performance has a positive effect on new product data quality. It provides the
opportunity to identify whether changes in the development and application of
forecasts are contributing to, or hindering, business success.
• Take an “outside view”. By using all the distributional information that is available
from comparable past initiatives (i.e. taking an “outside view”), new product data
objectivity can be enhanced. Reference class forecasting is a method for
systematically taking an outside view on forecasting problems.
• Combine multiple independent forecasts. New product forecasting accuracy and
objectivity can be substantially improved through the combination of multiple
independent forecasts. Independent forecasts can be generated in the following
three ways: (1) by analyzing different data, (2) by using different forecasting
methods, or (3) by using independent forecasters. Furthermore, when sufficient
resources are available, it is best to combine forecasts from at least five methods.
• Use a cross-functional forecasting approach. New product data quality can be
improved by involving all key stakeholders/experts from different departments in
new product forecasting.
• Focus on the most influential data. It is advised to first focus on improving new
product data which is of greatest concern in terms of data quality and impact on the
business case.
• Indicate data quality. By explicitly indicating new product data quality, R&D portfolio
information and decision quality can be enhanced. This is because projects with
different data quality can be compared more fairly.
• Improve the quality of execution of the predevelopment activities. Investing
sufficient time and resources in the predevelopment activities (e.g. preliminary
market assessment, preliminary technical assessment, market study, business
analysis, and financial analysis) is of critical importance.
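Purely as an illustration of the forecast-combination approach described above (the thesis itself contains no code), an equal-weights combination of independent forecasts can be sketched as follows. All figures, names, and the number of sources are hypothetical; equal weighting is only one simple combination rule.

```python
def combine_forecasts(forecasts):
    """Combine independent forecasts with an equal-weights average.

    Equal weighting is a robust default when there is no strong
    evidence that one forecast source is more accurate than another.
    """
    if not forecasts:
        raise ValueError("at least one forecast is required")
    return sum(forecasts) / len(forecasts)

# Illustrative year-1 revenue forecasts (in EUR) for a new product,
# generated independently by different data/methods/forecasters:
market_based = 1_200_000   # from a market analysis
analogy_based = 900_000    # from an analogy with a comparable past product
expert_panel = 1_050_000   # from a cross-functional expert panel

combined = combine_forecasts([market_based, analogy_based, expert_panel])
print(combined)  # 1050000.0
```

In practice more sophisticated weighting schemes exist, but the literature cited in this study suggests that even a simple average of several independent forecasts already improves accuracy and objectivity.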
Preface
This Master thesis is my final work to complete the MSc program of Innovation Management
at the Eindhoven University of Technology. This project is executed in cooperation with
Bicore, a business service provider specialized in the topic of innovation management,
including R&D portfolio management.
First of all I would like to thank my two university supervisors Prof. dr. Geert Duijsters and
Prof. dr. Ronald Mahieu, both from the Innovation, Technology, Entrepreneurship and
Marketing group within the IE & IS department of Eindhoven University of Technology.
Especially Prof. dr. Geert Duijsters has supported and advised me through many iterations
and feedback sessions. With his help this thesis has gained a much more solid and scientific
base. Later in the process, Prof. dr. Ronald Mahieu also provided me with valuable input to
improve this thesis.
I would also like to thank my company supervisor Jac Goorden for his continual support. Jac
introduced me to the topic of R&D portfolio management and provided me with a very
interesting research opportunity on the subject of new product data quality. During the
entire process Jac was always more than happy to advise me on a wide range of subjects. I
hope that the results from this study will be useful for Bicore to further improve its R&D
portfolio management support software, called Flightmap.
Special thanks go to Arjan de Keijzer. During my four years at the Eindhoven University of
Technology, Arjan and I worked together on many projects and assignments. Not only
did he prove to be a reliable and valuable team member, he was also a great guy to spend
numerous coffee and lunch breaks with.
Finally, I would like to thank my parents Henri and Helma. Throughout the many years that I
have worked on my educational development they have continually supported me, morally
and financially. The fact that I have come this far, finishing this work and receiving the
Master of Science title, can be attributed in large part to them.
Maarten Kluitman
October 2010
Table of Contents
Summary ............ i
Preface ............ iv
Table of Contents ............ v
1. Literature Review ............ 1
1.1 Open Innovation and Innovation Management Tools ............ 1
1.2 New Product Development and Portfolio Management ............ 3
1.2.1 Business and Product Strategy ............ 3
1.2.2 Roadmapping ............ 4
1.2.3 Product and Solution Development ............ 4
1.2.4 Life Cycle Management ............ 7
1.2.5 Portfolio Management ............ 7
1.2.6 Conclusion ............ 13
1.3 New Product Data Quality ............ 15
1.3.1 New Product Data ............ 15
1.3.2 New Product Data Quality ............ 16
1.3.3 Assessing and Improving New Product Data Quality ............ 21
1.3.4 Conclusion ............ 28
2. Conceptual Model ............ 30
3. Methodology ............ 31
3.1 Research Setting ............ 31
3.1.1 Introduction ............ 31
3.1.2 Sample ............ 31
3.2 Research Method ............ 31
3.2.1 Introduction ............ 31
3.2.2 Survey ............ 32
3.3 Interview Development ............ 34
3.4 Data Analysis ............ 36
4. Results ............ 37
4.1 Confirmation of Research Design ............ 37
4.2 R&D Portfolio Management ............ 38
4.3 New Product Forecasting ............ 39
4.4 Factors Influencing New Product Data Quality ............ 40
5. Discussion ............ 44
5.1 Conclusions ............ 44
5.1.1 Conceptual Model ............ 44
5.1.2 Assessing versus Improving Data Quality ............ 45
5.1.3 Factor Relations ............ 47
5.1.4 R&D Portfolio Management ............ 48
5.2 Managerial Implications ............ 49
5.2.1 R&D Portfolio Management ............ 49
5.2.2 New Product Data Quality ............ 50
5.3 Academic Implications & Further Research ............ 52
5.4 Limitations ............ 52
References ............ 54
Appendix A: Definitions of Data Quality Dimensions ............ 63
Appendix B: Types of New Products ............ 64
Appendix C: Company Description ............ 65
Appendix D: Research Sample Descriptions ............ 66
Appendix E: Interview Questionnaire ............ 67
Appendix F: Confrontation Matrix ............ 75
Appendix G: Out-Degree Centrality ............ 77
1. Literature Review
1.1 Open Innovation and Innovation Management Tools
In 2003 Chesbrough argued that a paradigm shift was taking place in how companies
commercialize industrial knowledge. Within the closed innovation paradigm, the process
leading to innovation is completely controlled; all the intellectual property is developed
internally and kept within the company frontiers until the new product is
commercialized. Closed innovation, or internal R&D, was at its peak between the end of
World War II and the mid-eighties when many R&D departments of private companies were
at the leading edge of scientific research. In addition, this involvement in internal R&D was
perceived as a strong barrier to entry, because a large upfront investment in R&D was
required (Presans, 2009). However, recently this paradigm has been challenged by several
factors (Chesbrough, 2003). The first factor is the increasing availability and mobility of
skilled workers. In the last 40 years, changing
labor markets, globalization, and industrial
restructuring have greatly influenced the size
and composition of the labor force (Mather
and Lee, 2008). The growth of the venture
capital (VC) market is the second factor.
Since 1980 there has been an enormous
expansion of VC, a type of private equity
capital typically provided for early-stage,
high-potential, growth companies. Other factors are: the increasing costs and complexity of
R&D, the shortening of the technology life cycles, the increasing capability of external
suppliers, and the growing diffusion of leading-edge knowledge in universities and research
labs around the world. As a result of these ongoing trends most of the new knowledge
emerges outside the firm’s boundaries and a closed innovation approach is likely to overlook
the business opportunities from this large pool of external knowledge, while it cannot
prevent internally built knowledge from leaking out as entrepreneurial employees leave the
company and start their own business with venture capital financing (Chesbrough,
Vanhaverbeke and West, 2006). Thus, in this case closed innovation is no longer sustainable
and companies should embrace open innovation. Open innovation is defined as: “the use of
purposive inflows and outflows of knowledge to accelerate internal innovation, and expand
the markets for external use of innovation, respectively. Open innovation is a paradigm that
assumes that firms can and should use external ideas as well as internal ideas, and internal
and external paths to market, as the firms look to advance their technology” (Chesbrough et
al., 2006, p. 1). The open innovation concept is visualized in figure 1.

Figure 1: The open innovation paradigm for managing industrial R&D. Adapted from
Chesbrough (2003).
Open innovation can improve the effectiveness and efficiency of innovation strategies, and
thus improves the innovative capabilities of organizations (Faems, van Looy, and Debackere,
2005). The first reason for this is that interorganizational cooperation might provide access
to complementary assets needed to turn innovation projects into a commercial success
(Hagedoorn, 1993). Secondly, working together with other organizations might encourage
the transfer of codified and tacit knowledge, which in turn might result in the creation and
development of resources that would otherwise be difficult to mobilize and to develop
(Faems et al., 2005). Finally, there are cost-economizing incentives for open innovation,
which relate to the sharing of R&D costs (Hagedoorn, 2002).
In the literature innovation is identified as a very important strategic challenge. Open
innovation has increased the need to master the management of innovation, since
innovation complexity is increased by the introduction of a greater interdependency among
firms (Huston and Sakkab, 2007). Moreover, open innovation brings greater interface
demands and a greater organizational ability to absorb external information and assess the
impressions from the outside. According to Fredberg, Elmquist and Ollila (2008) it is essential
for companies to use the right tools to help manage the complexities which are introduced
by these additional demands. Innovation management tools can be defined as “the range of
tools, techniques, and methodologies intended to support the process of innovation and
help companies to meet new market challenges in a systematic way” (Igartua, Garrigós and
Hervas-Oliver, 2010). Examples of innovation management tools include: portfolio
management, roadmapping, and scenario techniques. Findings from the study by Igartua et
al. (2010) support that innovation management tools play an important role in facilitating an
open innovation strategy. More specifically, these tools help in “building and improving the
network, aligning network members to shared goals, improving the quality of the projects
defined, and establishing more efficient funding systems”.
This study will focus on one particular innovation management tool, R&D portfolio
management. The second chapter of this literature review will examine new product
development and portfolio management. Here, the overall framework for new product
development which includes the most important business processes related to portfolio
management will be explained. The third chapter will deal with new product data quality,
the main subject of this research project. New product data quality refers to the quality of
the data collected for portfolio analyses which includes financial data, marketing data, and
R&D data.
1.2 New Product Development and Portfolio Management
New product development is a potential source of competitive advantage and is among the
essential processes for success, survival, and renewal of organizations (Brown and
Eisenhardt, 1995). Krishnan and Ulrich (2001) define product development as “the
transformation of a market opportunity and a set of assumptions about product technology
into a product available for sale” (p. 1). It should be noted that most literature on new
product development adopts a broad definition for product. This means that the term can
describe any company offering, including services and ideas (Kahn, 2001).
In order to successfully commercialize new products a set of business processes needs to be
in place. The overall framework for new product development which includes the most
important business processes which are related to portfolio management is shown in
figure 2. As will be more thoroughly discussed in 1.2.1 and 1.2.2, strategic planning and
roadmapping involves determining the long-term goals and objectives of a business
organization. Moreover, it involves developing the courses of action and allocating the
necessary resources for carrying out these goals. It is of paramount importance that the
business’s strategy is aligned with the business’s innovation efforts. This strategy-innovation
alignment is difficult since strategic plans usually cover three to five years while innovation
is such a turbulent process that targets are likely to change. Therefore a distinct
management process needs to link the business and product strategy to the management of
R&D projects, technology, and other innovation efforts currently funded and underway. This
management process is referred to as portfolio management.
In the following sections each part of this framework will be briefly described in light of
portfolio management. Section 1.2.5 will thoroughly describe all aspects of portfolio
management. Finally, this chapter will be summarized in section 1.2.6.
Figure 2: Full new product development framework. Based on O’Connor (2005).
1.2.1 Business and Product Strategy
Business strategy can be defined as “the determination of the basic long-term goals and
objectives of an enterprise, and the adoption of courses of action and the allocation of
resources necessary for carrying out these goals” (Chandler, 1969; p. 13). The business
strategy gives direction to the entire company and it serves as the foundation for a product
strategy (McGrath, 2001). Product strategy is a management process which focuses on
existing products and new product introductions within a strategic framework. This
framework consists of the following six structural elements: (1) strategic vision, (2) platform
strategy, (3) product line strategy, (4) expansion strategy, (5) innovation strategy, and (6)
strategic balance and portfolio management (McGrath, 2001). Strategic balance and
portfolio management is all about making choices, setting priorities, and allocating resources
in order to align products under development with business and product strategy priorities
(McGrath, 2001). Moreover, strategic balance and portfolio management provides direction
to platform strategy, product line strategy, expansion strategy, and innovation strategy.
1.2.2 Roadmapping
Roadmapping or technology roadmapping is a “needs-driven technology planning process to
help identify, select, and develop technology alternatives to satisfy a set of product needs”
(Garcia and Bray, 1997; p. 12). In other words, it is a technique for supporting technology
management and planning with the purpose of making better technology investment
decisions (Garcia and Bray, 1997; Phaal, Farrukh and Probert, 2004).
The document that is generated by this process is a technology roadmap. This roadmap is a
representation of future products and/or technologies versus time and includes projects in
the current portfolio, as well as unfunded efforts planned for the future (Patterson, 2005).
Typically, technology roadmaps integrate commercial and technological knowledge in the
business (Phaal et al., 2004). It can be seen as a visualization of strategy or strategic
elements (Whalen, 2007). A schematic technology roadmap is displayed in figure 3. This
example shows how technology can be aligned to product developments, business strategy,
and market opportunities (Phaal et al., 2004).

Figure 3: Schematic technology roadmap. Adapted from Phaal et al. (2004).
There is a clear link between roadmaps and portfolio management. First of all, portfolio
management requires roadmaps as input. The inclusion of roadmaps in the portfolio
management reviews adds great value in the assessment of strategic balance of the portfolio
which results in better informed portfolio decisions (Whalen, 2007). Secondly, the
communication and understanding of the impact of portfolio management decisions can be
improved by updating the strategy and associated roadmaps (Whalen, 2007).
1.2.3 Product and Solution Development
Business processes used to develop products and solutions are called new product
development processes. These processes cover all steps from idea to product launch and are
seen as being necessary to effective NPD and key to success (Cooper and Edgett, 1986;
Griffin, 1997). The term NPD process should not be confused with the overall framework for
NPD. The NPD process (i.e. the product and solution development process) is one of the
business processes that needs to be in place to successfully commercialize new products and
is thus part of the overall framework for NPD.
The majority of organizations use a formal process for conducting new product development
(Adams-Bigelow, 2003; Griffin, 1997; Hustad, 1996). Research by Cooper (1990) has
demonstrated that formal NPD processes improve the probability of product development
success. These processes are composed of multiple stages and gates. Typically, each stage
consists of a set of prescribed, related and parallel activities, undertaken by a
cross-functional team (Cooper, 1990). Succeeding each stage is a gate which is characterized by
the following aspects: (1) deliverables, (2) criteria, and (3) decisions (Schmidt, Sarangee and
Montoya, 2009; Cooper, Edgett and Kleinschmidt, 2000). An example of such a systematic
new product framework is shown in figure 4. This example shows a new product
development process with five stages and five gates, typically used for major new product
projects. Processes with fewer gates are usually used for smaller projects (line extensions,
modifications, improvements or marketing requests), since these projects involve less risk
(Cooper, 2006).
Figure 4: A five-stage, five-gate process typically used for major new product projects.
Adapted from Cooper (2006).
The front end of this NPD process consists of the discovery of new and useful product ideas.
This stage is frequently called the ‘fuzzy front end’ and the activities involved are often
chaotic, unpredictable, and unstructured (Koen, 2005). At stage 1 a preliminary market,
technical, and business assessment is conducted. The objective is to determine the project’s
technical and marketplace merits. A thorough market, technical, and financial analysis takes
place at stage 2. The deliverable at the end of this stage is a business case which includes a
defined product, a business justification, and a detailed plan of action for the next stages.
The third stage or the development stage involves the development of the product and of
detailed test, marketing, and operation plans. Moreover, an updated financial analysis is
prepared, and legal/patent/copyright issues are resolved. Prior to full launch the entire
viability of the project is tested and the financial justification is obtained. Finally, during
stage 5 the new product is launched onto the market and fully commercialized.
At each gate the project is evaluated based on specified criteria. These criteria can help
reduce managerial uncertainty and can identify areas where additional attention and
resources are needed (Hart, Hultink, Tzokas and Commandeur, 2003). Findings from the
studies by Tzokas, Hultink, and Hart (2004), Hart et al. (2003), and Carbonell-Foulquié,
Munuera-Alemán and Rodríquez-Escudero (2004) show that companies use different criteria
at different NPD evaluation gates. While such criteria as strategic fit, technical feasibility,
intuition and market potential are stressed in the early screening gates of the NPD process, a
focus on product performance, quality, and staying within the development budget are
considered of paramount importance after the product has been developed. Based on the
evaluations at the gates, decisions are made to continue, kill, or hold projects (Cooper et al.,
2000).
Integration of the NPD Process and Portfolio Management
Cooper et al. (2000) discusses the integration of portfolio management tools into the new
product development process, based on the assumption that a gating or stage-gate NPD
process is in place. This integration is conceptually visualized in figure 5. According to Cooper
et al. (2000) there are two fundamentally different approaches concerning this integration:
the “gates dominate” approach and the “portfolio reviews dominate” approach.
In the “gates dominate” approach, key decisions are made at the gates of the new product
development process. This approach reviews individual projects in depth. At each gate, two
types of decisions are made. Firstly, a pass-versus-kill decision is made where projects are
evaluated against pre-defined criteria. Secondly, a go-versus-hold decision is made which
involves comparing the project under discussion with all other projects. Moreover, the
impact of the project on the total portfolio is assessed. In addition to these decisions at the
gates, the portfolio of all projects is reviewed once or twice a year. This so-called portfolio
review serves largely as a check that the gates of the new product development process are
working effectively. Decisions at the portfolio review as well as decisions at the gates of the
new product development process are driven by the business strategy and the new product
strategy.

Figure 5: Integration of portfolio management.
In the “portfolio reviews dominate” approach, major decisions are made at the combined
portfolio/gate 2 meeting. This is a quarterly meeting where all projects at or beyond gate 2
of the new product development process are evaluated at once. By an iterative process
resources are allocated to projects and the portfolio is checked for balance and strategic
alignment. Contrary to the first approach, gates in this approach are used merely as checkpoints. Projects are checked at each gate to ensure that they are on course, on budget, and
remain valuable. Similar to the first approach, decisions at the portfolio review as well as
decisions at the gates of the new product development process are driven by the business
strategy and the new product strategy.
It can be concluded that the “gates dominate” approach is a more decentralized decision
making process and the “portfolio reviews dominate” approach a more centralized decision
making process.
1.2.4 Life Cycle Management
Life cycle management refers to the management of products after the product is launched
onto the market. Typically, a successful product on the market progresses through four
major stages: (1) introduction, (2) growth, (3) maturity, and (4) decline (Lamb, Hair and
McDaniel, 2008). These stages are referred to as the product life cycle, which represents the unit sales curve for a product, extending from the time it is first placed on the market until it is removed (Rink and Swan, 1979). The theoretical rationale behind the product life cycle
concept originates from the theory of diffusion and adoption of innovations as proposed by
Rogers in 1962 (Rink and Swan, 1979; Rogers, 2003). It should be noted that commercialized
products in the early stages of the product life cycle remain part of the R&D portfolio
(Patterson, 2005).
According to Ausura, Gill and Haines (2005) life cycle management includes a great number
of different and varied activities. One of these activities is particularly relevant to portfolio
management. To be more specific, this activity involves the reporting and analysis of product
performance and financials (Ausura et al., 2005) and is also known as the post-launch evaluation (Cooper, 1990). Here the success of the new product in the market is assessed and compared to the original business case (Tzokas et al., 2004). The most frequently used
evaluation criteria at this evaluation gate are: sales in units, margin, sales growth, market
share, profit, customer acceptance, and customer satisfaction (Hart et al., 2003; Tzokas et
al., 2004). This information makes it possible to identify and track trends, abnormal business
situations, and out-of-boundary conditions (Ausura et al., 2005).
It can be concluded that post-launch evaluations provide important information for portfolio
management about the actual performance of products in the early stages of the product
life cycle. Moreover, these reviews provide a learning experience for future cases (Ausura et al., 2005).
1.2.5
Portfolio Management
This section discusses several aspects of portfolio management. First, Markowitz’s modern portfolio theory concerning financial assets is discussed. In paragraph 1.2.5.2 it is argued
that this theory can also be applied to portfolios of research and development (R&D)
projects. Subsections of the paragraph about R&D portfolio management deal with the
goals of portfolio management and the characteristics of this decision making area.
Furthermore, an overview of the most popular portfolio management methods is given.
1.2.5.1
Modern Portfolio Theory
Markowitz (1952, 1959) is the founder of modern portfolio theory (MPT). The subject is concerned with the analysis of portfolios containing large numbers of financial assets.
According to Markowitz (1959) a good portfolio is “a balanced whole, providing the investor
with protections and opportunities with respect to a wide range of contingencies” (p. 3).
A common property of investment opportunities is that their actual returns might differ
from what has been expected; i.e. investment opportunities are risky (Maringer, 2005). This
notion of financial risk, defined by the (potential) deviation of the real outcome from the
expected outcome, includes not only a lower than expected outcome (downside risk) but
also that the actual return is higher than initially expected (upside risk).
Markowitz’s mean variance theory models an asset’s return with the normal distribution.
The expected value (mean) of the returns and their variance capture all the information
about the expected outcome and the likelihood and range of deviations from it (Maringer,
2005). Another important aspect when comparing financial investment opportunities and combining them into portfolios is the strength of the correlations between the assets’ returns. To reduce risk it is necessary to avoid a portfolio whose financial assets are all highly positively correlated: when assets are imperfectly correlated, positive and negative deviations of the actual outcomes from the expected outcomes partly offset each other. In the words of Markowitz (1952, p. 89):
“Not only does mean variance theory imply diversification, it implies the ‘right kind’ of
diversification for the ‘right reason’”.
The fundamental theorem of mean variance portfolio theory explains how to select a
portfolio with the highest possible return for a given amount of risk, or how to select a
portfolio with the lowest possible risk for a given return. From these two principles an
efficient frontier has been formulated from which a preferred portfolio can be chosen by an
investor (Elton and Gruber, 1997).
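The mean-variance arithmetic behind these statements can be sketched for a two-asset portfolio. The sketch below uses invented return figures purely for illustration; it is not taken from Markowitz or Maringer:

```python
# Illustrative two-asset mean-variance calculation (hypothetical numbers).
# Portfolio return is the weighted mean of the asset returns; portfolio risk
# also depends on the correlation between the returns, which is what drives
# the diversification effect described in the text.

def portfolio_stats(w, mu, sigma, rho):
    """Expected return and standard deviation of a two-asset portfolio.

    w: weight of asset 1 (asset 2 gets 1 - w)
    mu: (mu1, mu2) expected returns; sigma: (s1, s2) std devs; rho: correlation.
    """
    w2 = 1.0 - w
    mean = w * mu[0] + w2 * mu[1]
    var = (w * sigma[0]) ** 2 + (w2 * sigma[1]) ** 2 \
        + 2.0 * w * w2 * sigma[0] * sigma[1] * rho
    return mean, var ** 0.5

mu, sigma = (0.08, 0.12), (0.10, 0.20)
for rho in (0.9, 0.0, -0.5):
    mean, sd = portfolio_stats(0.5, mu, sigma, rho)
    print(f"rho={rho:+.1f}: return={mean:.3f}, risk={sd:.3f}")
```

Running the loop shows the expected return is the same in all three cases while the portfolio risk falls as the correlation falls, which is exactly the “right kind of diversification” point made above.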
While Markowitz’s theory is concerned with the analysis of financial portfolios, it can also be
applied to portfolios of research and development (R&D) projects. Some of the simplest
elements of Modern Portfolio Theory are applicable to virtually any kind of portfolio. The
concept of capturing the risk tolerance of an investor by documenting how much risk is acceptable for a given return can be, and is, applied to a variety of decision analysis
problems. The next paragraph will focus on R&D portfolio management.
1.2.5.2 R&D Portfolio Management
R&D portfolio management is concerned with the analysis of portfolios containing a set of
R&D projects, technology, and new product efforts currently funded and underway
(Patterson, 2005). As discussed in the introduction of this chapter, it is a management
process that links the corporate strategy and the management of a project pipeline (EIRMA,
2002). In other words, R&D portfolio management is aimed at selecting projects and
allocating resources in order to achieve the business’s new product and technology
objectives (Cooper, Edgett and Kleinschmidt, 1999; EIRMA, 2002). While the context of R&D portfolio management is already visualized in figure 2 (p. 3), a more hierarchical overview is shown in figure 6.
Figure 6: the context of portfolio management, a hierarchy running from vision, mission, and strategy down to portfolio management, the project pipeline, and project management and review. Adapted from EIRMA, 2002.
Cooper et al. (1999) define portfolio management as “a dynamic decision process, whereby a business’s list of active new product (and R&D) projects is constantly updated and revised. In this process, new products are evaluated, selected, and prioritized; existing projects may be accelerated, killed or deprioritized; and resources are allocated and reallocated to the active projects. The portfolio decision process is characterized by uncertain and changing information, dynamic opportunities, multiple goals and strategic considerations, interdependence among projects, and multiple decision-makers and locations” (p. 335).
The concepts used in R&D portfolio management have their roots in financial portfolio
management; i.e. Markowitz’s mean variance theory. However, there are four differences.
Firstly, R&D projects typically aim to address a range of strategic, financial and marketing
goals, as opposed to a singular focus on financial return. Secondly, the assets of financial
portfolios are liquid and can be assessed or re-assessed at any point in time, while
opportunities for new projects may be limited and may appear in limited windows of time.
Furthermore, projects that have already been initiated cannot be abandoned without the loss of the sunk costs; put another way, there is little or no recovery/salvage value of a half-complete R&D project. Thus, R&D portfolio management deals with illiquid assets: R&D projects are not as easily traded as financial investments. Thirdly, contrary to financial assets, R&D projects typically demand a complex and usually constrained mix of resources. In other words, the number of projects in an R&D portfolio is limited by the resources available from R&D, engineering, marketing, and operations. Fourthly, the assets in financial portfolios are continuously divisible while portfolios of R&D projects are not (Wikipedia, 2010).
None of these differences necessarily eliminates the possibility of using the concepts from
MPT in R&D portfolio management. They simply indicate the need to run the optimization
with an additional set of constraints that would not normally apply to financial portfolios
(Wikipedia, 2010). Findings from McGrath (2004) are in line with this conclusion. She states
that companies that apply techniques borrowed from financial investment portfolio
management to analyze R&D portfolios incur significant strategic benefits.
Goals of R&D Portfolio Management
The overall goal of R&D portfolio management is to provide a coherent basis on which to
judge which projects should be undertaken (Tidd, Bessant and Pavitt, 2005). Failure to make
such judgments can have a negative impact on new product development performance, as
table 1 indicates.
R&D portfolio management combines a variety of approaches to prioritize and select
projects (see p. 10-11), in order to reach three wide-ranging goals. According to Cooper et al.
(1999) these are: (1) strategic alignment, i.e., the alignment between the firm’s business
strategy and NPD efforts; (2) maximization of value, i.e., the optimal ratio between resource
input and return; and (3) balance, i.e., a harmonious portfolio with respect to specific
parameters (e.g., incremental versus radical innovation or risk versus reward characteristics).
These three goals are potentially conflicting. For example, the portfolio with the greatest
value may not be very well balanced.
Without R&D portfolio management there may be…  ->  Impacts
No limit to projects taken on  ->  Resources spread too thinly
Reluctance to kill off or ‘de-select’ projects  ->  Resource starvation; impacts on time and cost (overruns)
Lack of strategic focus in the project mix  ->  High failure rates, or success of unimportant projects and opportunity cost against more important projects
Weak or ambiguous selection criteria  ->  Projects find their way into the mix because of politics, emotion, or other factors; high downstream failure rates and resource diversion from other projects
Weak decision criteria  ->  Too many ‘average’ projects selected, with little downstream market impact
Table 1: What happens when there is no R&D portfolio management method. Adapted from Tidd et al. (2005).
Characteristics of R&D Portfolio Management
The introduction of paragraph 1.2.5.2 already mentioned some important differences
between R&D projects and R&D portfolios with respect to financial assets and financial
portfolios. This sub-section will focus on specific characteristics that make R&D portfolio
management a challenging decision making area.
Cooper, Edgett and Kleinschmidt (2005) point out the following four unique features. First
of all, projects in the portfolio are at different stages of completion, yet all projects compete
against each other for resources. This implies that comparisons must be made between
projects with different data quality. This unique feature is of great importance to this study
since it highlights the importance of new product data quality. Secondly, the decision
environment is very dynamic. This means that the status and prospects for projects in the
portfolio are constantly changing as new information becomes available. Thirdly, R&D
portfolio management deals with future events and opportunities. As a result much of the
information required to make decisions is highly uncertain. Lastly, resources to be allocated across projects are limited. A decision to fund one project may mean that resources must be
taken away from another and resource transfers between projects are not totally seamless.
R&D Portfolio Management Methods and Required Data
The best practices study conducted by Cooper, Edgett and Kleinschmidt (1998) revealed that top performers, with regard to the performance of the business’s portfolio, use a combination of different portfolio methods. Figure 7 shows the percentage of businesses using each portfolio method. The numbers add up to more than 100% because firms use multiple methods. The stars in the graph denote a significant difference between top performers and poor performers. Thus, businesses with the strongest portfolio performance tend to use strategic approaches and bubble diagrams more than businesses with weak portfolio performance. Next, each method will be briefly explained.
Figure 7: popularity of R&D portfolio methods. Adapted from Cooper et al. (1998).
Financial methods include various financial performance metrics, such as Net Present Value
(NPV), Expected Commercial Value (ECV), Return on Investment (ROI), and Economic Value
Added (EVA). For the NPV analysis the following data is required: the firm’s discount rate
(i.e. the cost of capital or the required rate of return), the length of the project, the amount
required to initiate the project, and the projected net cash flows to be received throughout
the life of the project. As can be seen from the required inputs, the NPV method ignores probability and risk. In contrast to this analysis, the ECV method introduces the notion of
risks and probabilities. In order to calculate the expected commercial value the following
additional data is required: the probability of technical success, the probability of
commercial success, and the commercialization costs. Financial methods are used by 77.3%
of businesses, and for 40.4% of businesses this is the dominant method (Cooper et al., 1998).
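The data requirements described above can be illustrated with a small sketch. The NPV function follows the standard discounting formula; the ECV line uses the two-stage form popularized by Cooper et al., with all project figures invented for illustration:

```python
# Sketch of the NPV and ECV calculations described in the text.
# All cash flows, probabilities, and costs below are hypothetical.

def npv(rate, cash_flows):
    """Net present value of cash flows; cash_flows[0] occurs now (t = 0)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def ecv(pv, p_tech, p_comm, dev_cost, comm_cost):
    """Expected commercial value in the two-stage form popularized by
    Cooper et al.: commercialization spend is risked by the probability of
    commercial success, development spend by the probability of technical
    success."""
    return (pv * p_comm - comm_cost) * p_tech - dev_cost

# Required NPV inputs: discount rate, initial outlay, and projected net cash flows.
project_npv = npv(0.10, [-1000, 300, 400, 500, 400])
# Additional ECV inputs: success probabilities and commercialization costs.
project_ecv = ecv(pv=1500, p_tech=0.7, p_comm=0.8, dev_cost=300, comm_cost=200)
```

Note how the ECV call needs exactly the additional data items listed above (probability of technical success, probability of commercial success, and commercialization costs) on top of the NPV inputs.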
Strategic approaches use the business’ strategy as the basis for allocating resources across
different types of projects. For instance, new product development projects are ranked or
rated on strategic fit. This method is used by 64.8% of businesses and for 26.6% this is the
dominant method (Cooper et al., 1998).
Bubble diagrams refer to the use of two-dimensional scatterplots where a third variable is
represented by the size of the points/bubbles. Projects are plotted in this diagram and then
categorized according to the zone or quadrant they are in. Businesses plot various
parameters on these bubble diagrams in order to seek a harmonious portfolio. A popular
type of chart is the so called risk-reward diagram. Here reward (NPV) is plotted on one
dimension and the probability of success (technical, commercial, overall) is plotted on the
other. The size of the bubbles can represent for example annual resources or strategic fit. A
total of 40.6% of businesses use bubble diagrams, and for only 8.3% of businesses this is the
dominant approach (Cooper et al., 1998).
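The categorization step behind a risk-reward diagram can be sketched as a simple quadrant test. The projects, thresholds, and quadrant labels below (borrowed from common risk-reward terminology) are illustrative assumptions, not data from Cooper et al.:

```python
# Sketch of the quadrant categorization behind a risk-reward bubble diagram.
# Projects and cut-off values are hypothetical.

def risk_reward_quadrant(npv, p_success, npv_cut, p_cut):
    """Classify a project by the quadrant it falls into, using labels
    commonly attached to risk-reward diagrams."""
    if p_success >= p_cut:
        return "pearl" if npv >= npv_cut else "bread and butter"
    return "oyster" if npv >= npv_cut else "white elephant"

projects = {"A": (12.0, 0.8), "B": (2.0, 0.9), "C": (15.0, 0.3), "D": (1.0, 0.2)}
for name, (reward, p) in projects.items():
    print(name, risk_reward_quadrant(reward, p, npv_cut=5.0, p_cut=0.5))
```

In a real bubble diagram a third variable (e.g. annual resources) would set the bubble size; here only the two plotted dimensions are used for the quadrant decision.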
Scoring models refer to scaled ratings that are added to yield project attractiveness scores.
These scores become the criteria used to make project selection and/or ranking decisions.
Examples of popular criteria are: reward, business strategy fit, strategic leverage,
probability of commercial success, and probability of technical success. These models are
used by 37.9% of businesses and in 18.3% this is the dominant decision method (Cooper et
al., 1998).
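A minimal scoring model can be sketched as a weighted sum over the criteria listed above; the weights and ratings below are invented for illustration:

```python
# Sketch of a weighted scoring model. The criteria mirror those listed in
# the text; the weights and the 0-10 ratings are hypothetical.

CRITERIA_WEIGHTS = {
    "reward": 0.30,
    "business strategy fit": 0.25,
    "strategic leverage": 0.15,
    "probability of commercial success": 0.15,
    "probability of technical success": 0.15,
}

def attractiveness(ratings):
    """Weighted sum of scaled ratings -> project attractiveness score."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

ratings_a = {"reward": 8, "business strategy fit": 6, "strategic leverage": 5,
             "probability of commercial success": 7,
             "probability of technical success": 9}
score_a = attractiveness(ratings_a)  # projects are then ranked by this score
```

The resulting scores become the criteria used to make the selection and ranking decisions mentioned above.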
Checklists refer to the approach where projects are evaluated on a set of yes/no questions.
The answers to these questions are used to make project selection and/or ranking decisions.
Only 17.5% of businesses use this approach, and for 2.7% of businesses this is the dominant approach (Cooper et al., 1998).
Next to the aforementioned portfolio management methods, numerous other methods have
been discussed in literature. Among others these include probabilistic financial models (e.g.
Monte Carlo Simulation and decision trees), analytical hierarchy approaches (based on
paired comparisons of both projects and criteria), behavioral approaches (e.g. the Delphi and Q-Sort methods), and real options theory (Cooper et al., 1999). According to McGrath, Ferrier,
and Mendelow (2004) option value in real options theory is “related to the preservation of
choices, meaning that a firm can take a variety of actions when more information is
available, rather than make a full commitment to a given path at the outset of the project or
initiative” (p. 87). From a real option perspective, it might be valuable to invest in R&D
projects with a negative NPV when early investments can provide information which reduces
uncertainty (Lint and Pennings, 2001). This approach is especially useful for projects with
high technical uncertainty (McGrath, 1997). The real options approach evolved from
developments in financial theory, in particular the Black-Scholes (1973) model. Another particularly interesting method for the selection of an R&D project portfolio is Contingent Portfolio Programming (CPP), which was developed by Gustafsson and Salo (2005). Contrary to most other methods, CPP incorporates the decision maker’s risk preferences. Moreover, CPP extends other approaches in that it captures project synergies. These two elements are important aspects of Markowitz’s modern portfolio theory, but are often not addressed in R&D portfolio selection methods.
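The real options argument that a project with a negative full-commitment NPV can still be worth starting is easiest to see in a small two-stage example. All figures are hypothetical and discounting is ignored for simplicity:

```python
# Hypothetical two-stage example of the real options logic described above:
# a cheap pilot study resolves technical uncertainty before the large spend.

p_success = 0.4          # probability the pilot shows the technology works
pilot_cost = 10.0
launch_cost = 100.0
payoff_if_success = 180.0
payoff_if_failure = 0.0

# Full commitment now: spend everything up front, whatever the outcome.
npv_commit = (p_success * payoff_if_success
              + (1 - p_success) * payoff_if_failure
              - pilot_cost - launch_cost)   # 0.4*180 - 110 = -38.0 (negative)

# Staged: pay for the pilot, launch only if it succeeds, abandon otherwise.
npv_staged = -pilot_cost + p_success * max(payoff_if_success - launch_cost, 0.0)
#            -10 + 0.4*80 = 22.0 (positive)
```

The difference between the two figures is the value of the option to abandon: the early, small investment buys information that reduces uncertainty before the full commitment, which is precisely the point made by Lint and Pennings (2001).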
It can be concluded that there is no single method to make good project selection and/or
ranking decisions. The first reason for this is that all methods are somewhat unreliable
(Cooper and Edgett, 2007). By using multiple methods in combination the unreliability issue
can be (partially) resolved. Moreover, a single method cannot provide enough insight, since
the goals of portfolio management are wide-ranging. For example, financial methods can help to maximize the value of a portfolio, but they cannot ensure that the portfolio of projects reflects the business’s strategy.
It can also be concluded that the portfolio methods require information which is commonly
part of the business case of an R&D project. A business case is used to obtain management commitment and approval for investment in R&D projects by setting out the rationale for the investment (Office of Government Commerce, 2010). More specifically, the business case is
the result of a thorough market, technical and financial analysis which is a deliverable for
projects in the second stage of the new product development process (see 1.2.3). A
preliminary market, technical, and business assessment is conducted for projects in an
earlier stage (see 1.2.3). The deliverables of these activities can also serve as input for
portfolio analyses, even though the data is more uncertain.
1.2.6 Conclusion
This chapter provided a review of R&D portfolio management including all new product
development business processes related to it (i.e. the business and product strategy,
roadmapping, product and solution development, and life cycle management). Next, an
overview of the most important findings from this part of the literature review will be given.
R&D portfolio management is concerned with the analysis of portfolios containing a set of
R&D projects, technology, and new product efforts currently funded and underway. The
overall goal of this business process is to link the corporate strategy and the management of
the aforementioned innovation efforts. Furthermore, R&D portfolio management is a
dynamic decision process, whereby projects are evaluated, selected, and prioritized in order
to reach the following potentially conflicting goals: strategic alignment, maximization of
value, and portfolio balance. This way it provides a coherent basis on which to judge which
projects should be undertaken. In order to make such project selection and/or ranking
decisions businesses use a combination of different portfolio methods. These include,
among others, financial methods, strategic approaches, bubble diagrams, scoring models,
and checklists. Multiple methods are used since all methods are somewhat unreliable.
Moreover, a single method cannot provide enough insight, since the goals of portfolio
management are wide-ranging. It should be noted that most methods do not incorporate
the decision maker’s risk preferences and do not capture project synergies. However, these
two elements are important aspects of Markowitz’s modern portfolio theory.
All aforementioned portfolio methods require information which is commonly part of the
business case of a R&D project. More specifically, the results from activities such as market,
technical and financial analysis serve as input for portfolio analyses. However, although this
information is generally available it does not mean that R&D portfolio management is a
routine decision making area.
Almost two decades ago Roussel, Saad and Erickson (1991) stated that R&D portfolio
management will grow in the 1990s to become a powerful tool that is vital to successful
business performance. However, R&D portfolio management has remained a popular subject for academic research. This indicates that R&D portfolio management is still a challenging decision making area. Moreover, it suggests that there might be certain
factors that hinder the implementation of R&D portfolio management. The study by Cooper
et al. (2005) points out the following four specific characteristics that make R&D
portfolio management a challenging decision making area: (1) comparisons must be made
between projects with different data quality, (2) the decision environment is very dynamic,
(3) R&D portfolio management deals with future events and opportunities, and (4) resources
to be allocated across projects are limited. The first three characteristics are closely linked to
the notion of data quality. Thus, data quality could be an important factor that hinders the
implementation of R&D portfolio management, and possibly slows down perceived and
actual benefit gains from this decision making process. Chapter 1.3 will deal with this issue in
more detail.
1.3 New Product Data Quality
As already explained in chapter 1.2, R&D portfolio management is a challenging decision
making area. Three unique features of R&D portfolio management that account for this are
of particular interest for this study. Firstly, R&D projects, technology, and new
product efforts in a portfolio are at different stages of completion, nevertheless these R&D
projects and other innovation efforts compete against each other for resources. Thus, R&D
portfolio management involves making comparisons between projects with different data
quality. Secondly, R&D portfolio management deals with future events and opportunities. As
a result much of the information required to make decisions is highly uncertain. Thirdly, the
decision environment is very dynamic. This means that the status and prospects for projects
in the portfolio are constantly changing as new information becomes available.
As will be explained in this chapter, these three unique features have a strong relation with
new product data quality. First of all, section 1.3.1 will explain the term new product data in more detail. The importance of new product data quality will be explained in section 1.3.2. Additionally, the concept of quality will be clarified with the help of a hierarchical framework. Hereafter, this study will focus on ways to assess and improve new product data
quality. Finally, this chapter will be summarized in section 1.3.4.
1.3.1 New Product Data
The term ‘new product data’ will be used to depict the data collected for portfolio analyses
(see p. 12). From subsection 1.2.3 it is known that activities in the NPD process are
conducted by a cross-functional team. Moreover, NPD processes are used for new product
projects. Thus, new product data is collected for each new product project from the
different departments involved. This includes financial data (e.g. amount of funding and cost
of capital), marketing data (e.g. market share, market size, and commercial risk), and R&D
data (e.g. technology risk, non-staffing costs, and staffing costs).
New product data needs to be forecasted since it involves making statements about events
whose actual outcomes have not yet been observed (Wikipedia, 2010). For example,
statements about the market share at some specified future date have to be made while a
project is still in its development phase. There are many methods available that can be used
to forecast new product data and these forecasting techniques can be categorized in four
categories (Kahn, 2006; Armstrong, 2001). First of all, judgmental methods represent those
techniques that attempt to turn experience, judgments, and intuition into formal forecasts.
Examples of judgmental methods include: jury of executive opinion, Delphi method, and
decision trees. The second category includes those approaches that are based on the
customer and market research. The following four general classes exist within this category:
concept testing, product use testing, market testing, and pre-market testing. Time series
methods are also considered as a separate category. These techniques analyze historical
data as the basis of estimating future outcomes. Diffusion modeling, moving average, and
exponential smoothing are examples of time series methods. The fourth category consists of
those forecasting methods that use the assumption that it is possible to identify the underlying factors that might influence the variable that is being forecasted, i.e. causal or
regression modeling. Methods that are part of this category include: (non) linear regression,
event modeling, and econometrics. Selection of relevant methods is highly dependent on
the forecast objectives and the conditions.
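As an example of the time series category, simple exponential smoothing can be sketched in a few lines; the sales history and smoothing constant below are invented for illustration:

```python
# Sketch of simple exponential smoothing, one of the time series methods
# mentioned above. The monthly sales figures and alpha are hypothetical.

def exponential_smoothing(history, alpha):
    """Return the one-step-ahead forecast after smoothing the history.

    Each new observation pulls the smoothed level toward itself by a
    fraction alpha; alpha near 1 tracks recent data, alpha near 0 keeps
    a long memory of older data.
    """
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

monthly_units = [120, 132, 128, 141, 150, 147]
forecast_next = exponential_smoothing(monthly_units, alpha=0.3)
```

For new product forecasting such purely historical methods only become usable once some sales history exists; before launch, the judgmental and market research categories above have to carry more weight.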
Forecasts are inherently risky and uncertain, since knowledge about future events is
imperfect (Knight, 1921). According to Knight (1921), risk applies to situations where the outcome is not known, but the probabilities of the possible outcomes can be accurately measured. In contrast, uncertainty applies to situations where the probabilities of outcomes are completely unknown. Since risk involves a set of possible specific outcomes with known probabilities, it can be managed. Risk management is defined as “the identification,
assessment, and prioritization of risks followed by coordinated and economical application
of resources to minimize, monitor, and control the probability and/or impact of unfortunate
events” (Hubbard, 2009; p. 46). The AIM Global Data and Risk Management Survey (2005)
revealed that improving data quality is regarded as a key issue for risk management. It can
be concluded that new product data and consequently portfolio decisions always involve
some amount of risk and uncertainty. However, the amount of uncertainty partly depends
on the quality of the data. This means that poor new product data quality increases decision
making uncertainty. Portfolio decision making is even more uncertain when new product
data quality is completely unknown. The following section will focus on new product data
quality.
1.3.2 New Product Data Quality
1.3.2.1 Is New Product Data Quality Important?
The importance of new product data quality is acknowledged by at least two scientific
articles in the field of R&D portfolio management. The first section of this paragraph will
summarize the most important findings of these studies related to data quality. Since the evidence provided there is far from overwhelming, the second section of this paragraph draws additional evidence on the importance of data quality from the strategic decision making and management information systems literature.
Research by Cooper et al. (2000) has identified several problem areas in portfolio management. One of these problems is that portfolio decisions are often made in the absence of solid information. According to Cooper et al. (2000) the problem is that “the
predevelopment work is not done well enough to provide the quality of information that
management needs to make sound decisions” (p. 8). Here, predevelopment work includes
the following activities: preliminary market assessment, preliminary technical assessment,
market study, business analysis, and financial analysis. It can be concluded that this problem
is clearly a data quality issue.
O’Connor (2004) discusses the implementation of portfolio management and states that this
is a major challenge for every organization. In this study two top challenges to portfolio
management implementation are identified which are strongly related to data quality. First
of all, managers assert that poor portfolio analyses are conducted due to emphasis on speed and lack of good information. Defining and getting accurate data input for metrics is another
key problem for managers. These factors hinder the implementation of portfolio
management, and slow down benefit gains from this decision making process.
Portfolio Management as a Rational Strategic Decision Making Process
Strategic decisions are defined as “a specific commitment to action which is important, in
terms of actions taken, the resources committed, or the precedents set” (Mintzberg,
Raisinghani, and Theoret, 1976; p. 133). From this definition it can be argued that portfolio
management decisions are essentially strategic decisions, since portfolio management
requires the senior management of the business to make decisions about resource allocation
that critically affect organizational health and survival (Eisenhardt and Zbaracki, 1992).
Dean and Sharfman (1996, p. 372) examined the relationship between strategic decision
making processes and decision effectiveness or “the extent to which a decision achieves the
objectives established by management”. One of the two constructs used to represent the
strategic decision making process was procedural rationality, defined as “the extent to which
the decision process involves the collection of information relevant to the decision and the
reliance upon analysis of this information in making the choice” (Dean and Sharfman, 1996;
p. 373). Table 2 provides an overview of the items that represent procedural rationality. In
this study an overall significant effect between procedural rationality and decision
effectiveness was detected. Thus, managers who collected information and used analytical
techniques made decisions that were more effective than those who did not. These findings
are in line with results from Elbanna and Child (2007). These authors argue that rational
processes are recognized as a central aspect of strategic decision making in the literature of
decision making. Findings from their research support this, since it is concluded that rational
processes have a dominant positive role in strategic decision effectiveness.
R&D portfolio management involves the collection of information about R&D projects (i.e.
new product data) and this information is analyzed with several portfolio methods which are
quantitative in nature (see 1.2.5.2). Therefore, it can be concluded that portfolio
management is a decision process with high procedural rationality. Consequently,
businesses with a portfolio management process in place will make more effective R&D
portfolio decisions than those businesses that do not. One of the major findings of The
Global Innovation 1000 (Jaruzelski, Dehoff, and Bordia, 2005) is that financial returns on
innovation investment depend on the effectiveness of innovation processes, rather than the
magnitude of R&D spending. In other words, R&D portfolio management, recognized as an
important part of new product development, improves the effectiveness of innovation
processes and it is therefore an important driver of a company’s return on innovation
investment (Jaruzelksi, Dehoff, and Bordia, 2007).
Most rational decision processes, including R&D portfolio management, rely on information
technology to improve decision making efficiency. Moreover, information technology can
improve the effectiveness of that decision (Shim, Warkentin, Courtney, Power, Sharda and
Carlsson, 2002). The term management information system (MIS) is commonly used to refer
to information technology that is intended to support decision making. Lucey (2005) defined
MIS as “a system to convert data from internal and external sources into information, in an
appropriate form, to managers at all levels in all functions to enable them to make timely
and effective decisions for planning, directing, and controlling the activities for which they
are responsible” (p. 2). Thus, the raw input of the MIS is considered as data and the
processed outcome of the system is considered as information. Notice that in this study new
product data refers to the input of R&D portfolio information systems.
Procedural rationality:
 The extent to which the group looks for information in making this decision.
 The extent to which the group analyzes relevant information before making a decision.
 The extent to which quantitative analytic techniques are seen as important in making
the decision.
 The extent to which the process that had the most influence on the group decision is
described as analytical.
 The extent to which the group is effective at focusing its attention on crucial
information and ignoring irrelevant information.
Table 2: items to describe procedural rationality. Adapted from Dean and Sharfman (1996).
The success of information systems in general depends on the quality of the data (system
input) and the quality of the information (system output). It is obvious that decisions based
on high quality data and information have a better chance of advancing the business’s goals
(Redman, 1998). First the importance of data quality will be explained.
A widely accepted fact in computer science is that if the collected input data is invalid the
resulting output of the analysis will also be invalid. The expression ‘Garbage In, Garbage
Out’ (GIGO) is a popular expression to depict this phenomenon. It signifies that no matter
how sophisticated an information processing system is, the quality of the produced
information cannot be better than the quality of the data (Business Dictionary, 2010). On top
of this, poor data quality compromises decision making since decisions are no better than
the data on which they are based (Redman, 1998). Thus, in order to draw valid conclusions
from portfolio analyses the collected new product data must also be valid i.e. of high quality.
Next, the importance of information quality will be explained.
Information quality is of utmost importance since it determines the usefulness and success
of MIS (Teo and Wong, 1998; DeLone & McLean, 2003). If the information derived from MIS
is of low quality it can have a detrimental effect on management control and decision
making (Northrop, Kraemer, Dunkle and King, 1990). Findings from Teo and Wong (1998) are
consistent with this since information quality appeared to have a significant relationship
with organizational impact in their study. DeLone & McLean (1992) suggested a
comprehensive, multidimensional model of information systems success. Based on
numerous empirical and theoretical contributions of researchers, who have tested or
discussed this so called D&M IS Success Model, an updated model has been provided
(DeLone & McLean, 2003). The updated model, presented in figure 8, consists of six
interrelated dimensions of information systems success:
 System quality; reflects the more engineering-oriented performance characteristics
of the information processing system. Usability, availability, reliability, and response
time are examples of qualities that are valued by users of a management information
system.
 Information quality; refers to the quality of the information that the system
produces. Information quality attributes include among others accuracy, relevance,
reliability, timeliness, and understandability.
 Service quality; refers to the overall support delivered by the service provider.
 Use; recipient consumption of the output of an information system.
 User satisfaction; an important means of measuring customers’ opinions of an
information system.
 Net benefits; captures the balance of the positive and negative impacts of IS on
customers, suppliers, employees, organizations, markets, industries, economies, and
even societies as a whole.
Figure 8: D&M IS success model, linking information quality, system quality, and service
quality to intention to use, use, user satisfaction, and net benefits. Adapted from DeLone
and McLean (2003).
The quality of the information output has a significant relationship with (1) the intention to
use information system reports and (2) user satisfaction (DeLone and McLean, 2003). These
two constructs are strongly related to information systems success. Thus, by increasing
output data quality more people will use the information system and the users will be more
satisfied.
It can be concluded that new product data quality is of great importance to R&D portfolio
managers. This is because R&D portfolio management is a decision process with high
procedural rationality and thus highly dependent on collected information and analytical
techniques. Consequently, decisions based on high quality data have a better chance of
advancing the business’s goals. Moreover, the quality of new product data has a positive
relationship with the intention to use the results that are retrieved from portfolio analyses.
Also, R&D portfolio managers will be more satisfied with the portfolio management
information system when new product data is of high quality.
1.3.2.2 What is Quality?
The concept of quality is not easy to define. According to Reeves and Bednar (1994) a
universal definition of quality does not exist; rather, different definitions are appropriate
under different circumstances. Wang and Strong (1996) developed a framework that
captures the aspects of data quality that are important to data customers. In this study data
quality is defined as “data that are fit for use by data consumers” (p. 6). An empirical
approach has been used in order to capture the data quality dimensions, defined as “a set of
data quality attributes that represent a single aspect or construct of data quality” (p. 6), that
are important to data consumers. The hierarchical framework consists of twenty data quality
dimensions which are sorted into a set of four categories. Intrinsic data quality denotes that
data has quality in its own right. Contextual data quality highlights the requirement that data
quality must be considered within the context of the task at hand; it must be relevant.
Representational data quality and accessibility data quality emphasize the importance of
computer systems that store and provide access to data. Figure 9 shows the conceptual
framework of data quality. The definitions of all the data quality dimensions can be found in
appendix A. Follow-up research by Lee, Strong, Kahn and Wang (2002) has provided further
evidence that the dimensions within this framework provide comprehensive coverage of the
multi-dimensional data quality construct.
Figure 9: Data quality framework. Adapted from Wang and Strong (1996). Intrinsic data
quality: believability, accuracy, objectivity, reputation. Contextual data quality: value-added,
relevancy, timeliness, completeness, ease of operation, appropriate amount of data,
flexibility. Representational data quality: interpretability, ease of understanding,
representational consistency, concise representation. Accessibility data quality:
accessibility, cost-effectiveness, access security.
The data quality framework of Wang and Strong (1996) makes no classification of the four
data quality categories (intrinsic, contextual, representational, and accessibility data quality)
with regard to importance. However, due to time limitations of the master’s thesis (a
maximum of 21 weeks) this study will only focus on intrinsic data quality, hence the other
three data quality categories are beyond the scope of this study. Next, the reason for
choosing intrinsic data quality will be explained.
Intrinsic data quality is considered as fundamental since this category entails intrinsic data
properties. Intrinsic data properties are properties that the data has of itself, independently
of other things, including its context (Weatherson, 2006; Wikipedia, 2010). The other three
data categories entail extrinsic properties, which are properties that depend on the data’s
relationship with other things (Weatherson, 2006; Wikipedia, 2010). For new product data
quality this means that contextual, representational, and accessibility data quality heavily
depend on their relationship with type of analyses, and type of information technology while
the intrinsic data quality dimensions do not. In other words contextual, representational,
and accessibility data quality are company dependent. Since the goal of this research is to
provide generic solutions (i.e. solutions that are applicable to multiple companies) a focus
on intrinsic data quality is justified.
1.3.3 Assessing and Improving New Product Data Quality
This paragraph will focus on intrinsic new product data quality. More specifically, subsections 1.3.3.1 to 1.3.3.4 respectively deal with assessing and improving believability,
accuracy, objectivity, and reputation of new product data. While these four quality
dimensions are separately discussed, it does not mean that they are independent of each
other. On the contrary, it is very likely that the quality dimensions influence each other. For
example, data from a source that has proved to be accurate over a long period of time will
probably positively affect the trustworthiness of that source, which is a sub-dimension of
believability.
1.3.3.1 Believability
Wang and Strong (1996) define data believability as “the extent to which data are accepted
or regarded as true, real and credible”. Their survey shows that data consumers consider
believability as an essential aspect of data quality. Believability is itself decomposed into
sub-dimensions. Lee, Pipino, Funk, and Wang (2006) propose three sub-dimensions, namely
believability: (1) of source, (2) compared to internal common-sense standard, and (3) based
on temporality of data.
According to Prat and Madnick (2007) the believability of a data value depends on its origin
(source) and consequent processing history. In other words, it depends on the data
provenance, defined as “information that helps determine the derivation history of a data
product, starting from its original sources” (Simmhan, Plale, and Gannon, 2005). The
literature on provenance acknowledges data quality as a key application of provenance (Prat
and Madnick, 2007). The same authors developed a precise approach to measure data
believability and explicitly made use of provenance-based measurements. Table 3 provides
an overview of all dimensions of data believability that are used in this provenance-based
believability assessment.
Dimensions and definitions of data believability:
1. Trustworthiness of source: the extent to which a data value originates from trustworthy
sources.
2. Reasonableness of data: the extent to which a data value is reasonable (likely).
2.1 Possibility: the extent to which a data value is possible.
2.2 Consistency: the extent to which a data value is consistent with other values of the
same data.
2.2.1 Consistency over sources: the extent to which different sources agree on the data
value.
2.2.2 Consistency over time: the extent to which the data value is consistent with past
data values.
3. Temporality of data: the extent to which a data value is credible based on transaction
and valid times.
3.1 Transaction and valid times closeness: the extent to which a data value is credible
based on proximity of transaction time to valid time.
3.2 Valid times overlap: the extent to which a data value is derived from data values with
overlapping valid times.
Table 3: dimensions of believability (Prat and Madnick, 2007)
Based on the believability assessment of Prat and Madnick (2007) the following
generalizations to new product data believability can be made:
 Trustworthiness of the data source is positively related to new product data
believability. In order to be able to assess trustworthiness the data source needs to
be visible and traceable. Moreover, a reputation system should be in place where
trustworthiness values are saved.
 Believability of new product data is enhanced if different sources agree on the data
value.
 Believability of new product data is enhanced if the data value is consistent with past
data values. It should be noted that past data values are generally only available for
certain types of new products which include cost reductions, product improvements,
and line extensions.
 Proximity of transaction time to valid time is positively related to new product data
believability. This means that long-term forecasts are less believable than short-term
forecasts.
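The generalizations above can be sketched as a simple scoring model. The following Python sketch is illustrative only: the `believability_score` function, the 0 to 1 scales for each dimension, and the weights are assumptions introduced here for illustration, not part of Prat and Madnick's (2007) formal assessment.

```python
# Illustrative sketch: scoring the believability of a forecasted data value
# along the dimensions of Prat and Madnick (2007). The 0-1 scales and the
# weights are assumptions for illustration only.

def believability_score(trustworthiness, consistency_over_sources,
                        consistency_over_time, temporal_proximity,
                        weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine four dimension scores (each in [0, 1]) into one value."""
    scores = (trustworthiness, consistency_over_sources,
              consistency_over_time, temporal_proximity)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("dimension scores must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))

# A short-term forecast from a trusted source that agrees with other sources:
high = believability_score(0.9, 0.8, 0.7, 0.9)
# A long-term forecast from an unknown source with no forecasting history:
low = believability_score(0.3, 0.5, 0.0, 0.2)
print(high, low)
```

Such a score would in practice be calibrated against the reputation system mentioned in the first generalization; the example merely shows how the four dimensions could be combined.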
1.3.3.2 Accuracy
Wang and Strong (1996) define data accuracy as “the extent to which information is correct
and reliable”. The book by Lee et al. (2006) provides general guidelines for measuring data
accuracy, but these guidelines are not really helpful to assess and improve the accuracy of
new product data. Next, several methods for improving data accuracy will be explained.
These methods are mostly based on several principles for improving the accuracy of
forecasts from the book Principles of Forecasting (Armstrong, 2001).
Combination of Forecasts
Forecasting accuracy can be substantially improved through the combination of multiple
forecasts (Clemen, 1989). In other words, by averaging independent forecasts the deviation
of the actual value from the forecasted value decreases. According to Armstrong (2001) this
is the case in all situations studied to date. By examining 30 studies, the same author
estimated the typical gain from combining forecasts. On average, combining reduced
forecast errors by 12.5% and in every case the error was reduced.
Combining forecasts improves accuracy since the component forecast nearly always contains
some useful independent information (Bates and Granger, 1969). This independent
information may be of two kinds: (1) one forecast is based on variables or information that
the other forecast has not considered, (2) the forecast makes a different assumption about
the form of the relationship between the variables. However, forecasts are mostly highly
positively correlated which means that independent information is limited. Armstrong
(2001) proposes two ways to generate independent forecasts: use different data and/or
different methods. The more the data and the methods differ, the greater the expected
improvement in accuracy over the average of the individual forecasts (Armstrong, 2001).
Furthermore, when sufficient resources are available, it is best to combine forecasts from at
least five methods (Armstrong, 2001). In general, combined forecasts are more accurate as
the number of methods increases, although the rate of improvement decreases as more
methods are added.
When combining forecasts, there might be a difference in the accuracy of the forecasts, and
dependence among the forecasts. Weighted averages attempt to take these factors into
account. According to Armstrong (2001) there are broadly two ways to combine forecasts,
which will be explained briefly. The simplest and easiest approach is to use equal weights.
Equal weighting is found to be accurate for many types of forecasting and should be used
unless there is strong evidence to support unequal weighting of forecasts. The second
approach, subjective weighting, should only be used when those doing the weighting have
solid information about the relative accuracy of the sources. For example, a heavier weight
should be assigned if there is strong evidence that a particular method is more accurate than
others, or in case methods are more appropriate based on domain knowledge. In order to
protect against people’s biases, this approach should be used in a structured way and details
of the procedure should be recorded. Thus, a formal procedure should be used to combine
forecasts.
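The two combination approaches can be sketched as follows; the sales figures and the subjective weights are invented for illustration and do not come from the literature cited above.

```python
# Minimal sketch of combining forecasts. The three forecasts are assumed
# to be independently generated (different data and/or methods); the
# figures and weights are invented for illustration.

def combine_equal(forecasts):
    """Equal-weight combination: the plain average of the forecasts."""
    return sum(forecasts) / len(forecasts)

def combine_weighted(forecasts, weights):
    """Subjective weighting: use only with solid evidence on the relative
    accuracy of the sources, applied through a formal, recorded procedure."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * f for w, f in zip(weights, forecasts))

forecasts = [120_000, 150_000, 90_000]   # e.g. from three different methods
print(combine_equal(forecasts))           # 120000.0
print(combine_weighted(forecasts, [0.5, 0.3, 0.2]))
```

Equal weighting is the default; the weighted variant is only appropriate under the evidence conditions Armstrong (2001) describes.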
Obviously, combining forecasts is only possible if there is more than one sensible source of
forecasts. According to Armstrong (2001) combining forecasts is most useful when there is
much uncertainty. More specifically, combining is useful when it is uncertain which method
is best. For instance, when the future is expected to be especially turbulent, or when a new
situation is encountered. Also, combining forecasts can reduce error when the forecast
situation is uncertain. For example, in case of long-range forecasts, or in case of new product
forecasting.
Decomposition of Forecasts
Decomposition methods are designed to improve accuracy by splitting the judgmental task
into a series of smaller and cognitively less demanding tasks, and then combining the
resulting judgments (Lawrence, Goodwin, O’Connor, and Önkal, 2006). In other words,
estimation accuracy can be improved by decomposing the estimation problem into
sub-problems that can be more easily or confidently estimated, and then aggregating these
sub-problems based on sound rules (MacGregor, 2001). This implies that decomposition should
only be used for problems for which the target value is more uncertain than the component
values. Therefore one of the principles of decomposition by MacGregor (2001) is: “use
decomposition when uncertainty is high; otherwise use global or holistic estimation” (p. 5).
Two forms of decomposition used in forecasting can be distinguished (MacGregor, 2001).
One is multiplicative decomposition, where the breakdown of the problem is multiplicative
(e.g. sales forecast = market size forecast x market share forecast). Another form of
decomposition is segmentation, where the breakdown of the problem is additive (e.g. sales
forecast = region 1 sales forecast + region 2 sales forecast). No studies to date have directly
compared the two forms of decomposition, which means that the choice of decomposition
form will be based on the estimator’s judgment and the characteristics of the forecasting
problem (MacGregor, 2001).
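The two decomposition forms can be sketched directly; all component estimates below are invented for illustration.

```python
# Sketch of the two decomposition forms from MacGregor (2001), using
# invented component estimates.

# Multiplicative decomposition: sales = market size x market share.
market_size_forecast = 2_000_000      # forecasted units in the total market
market_share_forecast = 0.05          # forecasted share for the new product
sales_multiplicative = market_size_forecast * market_share_forecast

# Segmentation (additive decomposition): sales = sum of segment forecasts.
regional_forecasts = {"region 1": 60_000, "region 2": 40_000}
sales_segmented = sum(regional_forecasts.values())

print(sales_multiplicative)  # 100000.0
print(sales_segmented)       # 100000
```

Each component is easier to estimate than the aggregate, which is the point of the technique when uncertainty about the target value is high.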
Improving Reliability of Forecasts
Unreliability is error introduced into the forecast by the natural inconsistency of the human
judgment process (Stewart, 2001). Therefore, judgmental forecasts are affected by
unreliability and in the long run it can only reduce the accuracy of forecasts (Stewart, 2001).
Stewart (2001) suggests three principles for improving reliability of judgmental forecasts
from which great benefits for new product data accuracy can be expected. This is because
these principles are especially useful when the forecasting environment contains a high
degree of uncertainty.
First of all, mechanical methods should be used to process information. In general,
mechanical methods refer to computerized models. The purpose of this principle is “to
improve reliability of information processing by substituting an analytical process for an
intuitive one” (Stewart, 2001; p. 94). In other words, highly analytical forecasting processes
will generally result in more reliable forecasts than those that are highly intuitive.
Secondly, in order to improve the reliability of information processing several forecasts
should be combined. This method for improving forecasting accuracy is already described in
the first section of this paragraph.
Another way to improve the reliability of information processing is by requiring justification
of forecasts. Similar to the first principle, requiring justification of forecasts will make the
forecasting process more analytical (Stewart, 2001). This is also acknowledged by Kahn
(2006) who states that “in each of the stages, establishing and revising assumptions is an
important, if not preeminent step across NPD stages in the course of forecasting new
products” (p. 32). According to Kahn (2006) new product forecasting should be viewed as a
process of assumption management, which involves the generation, translation, and tracking
of assumptions and their metrics. Additionally, decomposition methods can also help to
improve forecasting reliability since these methods offer a structured framework for the
analysis. Moreover, decomposition methods leave an audit trail that can be examined later
to retrace the arguments behind the judgments (Salo and Bunn, 1995).
Based on the literature on forecasting accuracy the following generalizations to new product
data accuracy can be made:
 Accuracy of new product data can be substantially improved through the
combination of multiple forecasts.
 Forecasting accuracy can be improved by decomposing the estimation problem into
sub-problems, and then aggregating the sub-problems.
 Reliability and thus accuracy of new product data can be improved through the use
of mechanical methods to process information.
 Assumption management, or the justification of forecasts, is an important step in the
course of forecasting new products to ensure accurate new product data.
1.3.3.3 Objectivity
Wang and Strong (1996) define data objectivity as “the extent to which information is
unbiased, unprejudiced, and impartial”. According to Tyebjee (1987) new product data is not
objective since the new product forecasting process generates an upward bias in the
forecast of product performance and a downward bias in the forecast of project costs. To be
more specific, three behavioral biases influence the forecasts made during a new product
development process, namely the post-decision audit bias, the advocacy bias, and the
optimism bias (Tyebjee, 1987).
The post-decision bias results from the fact that the forecasting method can only be audited
against those products that were actually commercialized. Most likely, a disproportionately
large number of the forecasts made for commercialized products overestimated actual
product performance. Therefore, the post-decision bias causes a tendency towards
disappointment in actual product performance. (Tyebjee, 1987)
The advocacy bias and optimism bias are two major causes for the high level of
misinformation about product cost, benefits, and risks (Flyvbjerg, 2007a). The advocacy bias,
or strategic misrepresentation, reflects the tendency of forecasters to promote their project
by deliberately making over-optimistic forecasts in order to improve their bargaining
leverage for resources (Flyvbjerg, 2007a; Tyebjee, 1987). According to Flyvbjerg (2007a) this
bias influences forecasts in the case of significant organizational pressure. Moreover,
engaging in forecasting activities creates optimism in what the plan can achieve (Tyebjee,
1987; Lovallo and Kahneman, 2003; Flyvbjerg, 2007a). In other words, forecasters
overestimate the likelihood of positive events and underestimate the likelihood of negative
events. This systematic tendency to be over-optimistic is referred to as optimism bias and is
also known as the planning fallacy. It is a form of cognitive bias – errors in the way the mind
processes information – and thus not the result of any conscious factor (Tyebjee, 1987;
Lovallo and Kahneman, 2003; Flyvbjerg, 2007a).
It can be concluded that forecast biases due to optimism and advocacy result in inflated
sales forecasts and deflated cost forecasts in new product proposals. This is especially true if
the rewards associated with having the plan approved are not adequately balanced by the
negative consequences of not delivering on the expectations generated in the plan. Tyebjee
(1987) suggests that reward structures that include disincentives for not meeting forecasts
might reduce both aforementioned biases in new product forecasts. Several academic
articles on sales forecasting (Mentzer, Bienstock and Kahn, 1999; Moon, Mentzer, Smith and
Garver, 1998; Mentzer and Kahn, 1997) support this by suggesting that forecasting
performance measures should be incorporated into job performance evaluation criteria.
According to Lovallo and Kahneman (2003) the advocacy bias and optimism bias are caused
by forecasters taking an “inside view”, where the focus is on the elements of the specific
planned action rather than on the outcomes of similar actions already completed.
Forecasters should therefore use an “outside view”, which means that analysts use all the
distributional information (e.g. probability bands, and risk) that is available from comparable
past initiatives (Lovallo and Kahneman, 2003). This approach is much more likely to yield a
realistic estimate (Lovallo and Kahneman, 2003). Reference class forecasting is a method for
systematically taking an outside view on forecasting problems (Flyvbjerg, 2007b). More
specifically, this method consists of the following three steps:
1. Identify a relevant reference class of past, similar projects;
2. Establish a probability distribution for the selected reference class;
3. Compare the specific project with the reference class distribution, in order to
establish the most likely outcome for the specific project (Flyvbjerg, 2007b).
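The three steps can be sketched numerically. The reference class figures below are invented for illustration; using the empirical quartiles and the median as the "most likely outcome" is one simple reading of step 2 and step 3, not Flyvbjerg's prescribed procedure.

```python
# Sketch of the three reference class forecasting steps (Flyvbjerg, 2007b),
# with invented figures.
import statistics

# Step 1: identify a relevant reference class of past, similar projects.
# Each value is the realized outcome as a fraction of the original estimate.
reference_class = [1.10, 0.95, 1.40, 1.25, 1.05, 1.60, 1.15, 1.30]

# Step 2: establish a probability distribution for the reference class
# (here summarized by its empirical quartiles).
q1, med, q3 = statistics.quantiles(reference_class, n=4)

# Step 3: compare the specific project with that distribution, e.g. uplift
# the project's own estimate by the median realized factor.
estimated_cost = 500_000
most_likely_cost = estimated_cost * statistics.median(reference_class)
print(round(most_likely_cost))  # 600000
```

The quartiles also give the probability bands the "outside view" calls for, which a purely inside-view estimate lacks.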
Another possible way to increase the objectivity of forecasts is through the combination of
multiple forecasts. Sanders and Ritzman (2004) argue that the integration of judgmental and
quantitative forecasts decreases the degree of bias. However, combined forecasts should be
independently generated (Sanders and Ritzman, 2004). These findings are in line with
Armstrong (2001), who states that objectivity is enhanced if several independent forecasts
are combined. Independent forecasts can be generated in the following three ways: by
analyzing different data, by using different forecasting methods, or by using independent
forecasters. More information about combining forecasts can be found in section 1.3.3.2.
Based on the literature on forecasting objectivity the following generalizations to new
product data objectivity can be made:
 Reward structures that include disincentives for not meeting forecasts will reduce
biases in the forecast and thus new product data objectivity will improve.
 Objectivity of new product data is enhanced if forecasters take an “outside view” on
the forecast problem. For example, by using the reference class forecasting method.
It should be noted that relevant reference classes of past, similar projects are
generally only available for certain types of new products which include cost
reductions, product improvements, and line extensions.
 Objectivity of new product data is enhanced if a combination of several independent
forecasts is used.
1.3.3.4 Reputation
Wang and Strong (1996) define data reputation as “the extent to which information is highly
regarded in terms of its source or content”. Reputation is similar to the concept of
trustworthiness, a sub-dimension of believability (Prat and Madnick, 2007). Consequently,
reputation will not be separately discussed.
1.3.3.5 Other/Overall
This sub-section will provide an overview of factors that might influence intrinsic data
quality. However, the sources only pointed out that these factors affect data quality in
general.
First of all, new product data quality may be improved by making the uncertainty of
forecasts explicit. A way to do this is to provide interval forecasts, which usually consist of
upper and lower limits associated with a prescribed probability (Chatfield, 1993). According
to Chatfield (1993) interval forecasts deliver the following advantages over point forecasts:
(1) possibility to assess future risk, (2) opportunity to enable different strategies to be
planned for the range of possible outcomes indicated by the interval forecast, and (3)
possibility to compare forecasts from different methods more thoroughly and explore
different scenarios based on different assumptions.
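An interval forecast of the kind Chatfield (1993) describes can be sketched as follows. The assumption of normally distributed forecast errors, and all figures, are introduced here for illustration; Chatfield does not prescribe a particular distribution.

```python
# Sketch of an interval forecast around a point forecast, assuming the
# forecast errors are roughly normal with a known standard deviation
# (both figures invented for illustration).
from statistics import NormalDist

point_forecast = 100_000   # expected first-year unit sales
error_std = 15_000         # assumed standard deviation of the forecast error
probability = 0.90         # prescribed probability for the interval

z = NormalDist().inv_cdf((1 + probability) / 2)   # two-sided z-value
lower = point_forecast - z * error_std
upper = point_forecast + z * error_std
print(f"{probability:.0%} interval: {lower:,.0f} to {upper:,.0f}")
```

Reporting the interval rather than only the point forecast makes the uncertainty explicit and enables the risk assessment and scenario planning advantages listed above.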
Secondly, measuring and analyzing forecast performance may have a positive effect on new
product data quality. It provides the opportunity to identify whether changes in the
development and application of forecasts are contributing, or hindering, business success
(Moon et al., 1998). In other words, by establishing multidimensional metrics to measure
forecast performance sources of errors can be isolated and targeted for improvement
(Mentzer and Kahn, 1997; Moon et al., 1998; Mentzer et al., 1999).
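Two simple metrics illustrate how forecast performance measurement can isolate sources of error; the metric names are standard forecasting measures, but their use here and the figures are illustrative, not drawn from the cited studies.

```python
# Sketch of two forecast performance metrics: mean absolute percentage
# error (size of errors) and mean percentage error (direction of bias).
# Figures are invented for illustration.

def mape(actuals, forecasts):
    """Mean absolute percentage error: how large the errors are."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def mpe(actuals, forecasts):
    """Mean percentage error: a persistently negative value reveals
    systematic over-forecasting, e.g. optimism or advocacy bias."""
    return sum((a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals   = [100, 200, 400]
forecasts = [120, 210, 440]   # consistently optimistic forecasts
print(f"MAPE: {mape(actuals, forecasts):.1%}")
print(f"MPE:  {mpe(actuals, forecasts):.1%}")
```

Tracking both dimensions over time would show whether changes to the forecasting process reduce error size, bias, or both.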
Thirdly, new product data quality may be improved by involving all key stakeholders/experts
from different departments in new product forecasting. According to Moon et al. (1998)
companies forecast most effectively when a cross-functional forecasting approach is used.
This means that input is obtained from people in different functional areas, each of whom
contributes relevant information and insights that can improve overall accuracy (Moon et
al., 1998).
The degree to which rules, policies, and procedures govern the portfolio management
and/or new product forecasting work activities (i.e. process formality) may also influence
new product data quality. According to Tatikonda and Montoya-Weiss (2001) process
formality refers to “the existence of an overall organizational process and structure for the
project”. Process formality may provide the following benefits: (1) it may aid development
effectiveness, since a work process with controls and reviews provides a sense of structure
and sequence to the work, (2) rules and reviews provide motivation and allow personnel to
assess their work activities and progress, and (3) it can aid in promoting cross-functional
communication and coordination (Tatikonda and Montoya-Weiss, 2001).
Another factor that influences new product data quality is the forecasting strategy. Booz,
Allen and Hamilton (1982) have identified six categories of new products in terms of their
newness to the company and the marketplace (see appendix B). According to Kahn (2006)
each type of new product reflects different scenarios and needs for new product
forecasting. Therefore, not all forecasting techniques are appropriate for every forecasting
situation. Kahn (2006) proposes the following four new product forecasting strategies: (1)
sales analysis, (2) life-cycle analysis, (3) customer and market analysis, and (4) scenario
analysis. Figure 10 shows an overview of these strategies within the product-market growth
matrix from Ansoff (1957).
Figure 10: The product-market matrix including forecasting strategies. Adapted from Kahn
(2006). Current market/current product technology: sales analysis (cost reductions, product
improvements); current market/new product technology: life-cycle analysis (line
extensions); new market/current product technology: customer and market analysis (new
uses, new markets); new market/new product technology: scenario analysis
(new-to-the-company, new-to-the-world).
Finally, new product data quality may improve over the phases of the innovation process.
Projects at a late stage of completion (e.g. stage 3 in fig. 4 p. 5) have already undergone a
thorough market, technical, and financial analysis. Moreover, the development of the
product and of detailed test, marketing, and operation plans has started. Thus, more and
better information becomes available as the project progresses.
1.3.4 Conclusion
This chapter started with an introduction to new product data quality. Thereafter, the
importance of new product data quality was explained. Moreover, the concept of quality
was defined. Finally, this chapter gave an overview of possible methods for assessing and
improving data quality. Next, an overview of the most important findings from this final part
of the literature review will be given.
In this study the term ‘new product data’ is used to depict the data collected for portfolio
analyses. This data is collected for each new product project and includes financial data,
marketing data, and R&D data. New product data is inherently risky and uncertain since it
needs to be forecasted. Clearly, knowledge about future events is imperfect. This means that
new product data and consequently portfolio decisions always involve some amount of risk
and uncertainty. However, the amount of uncertainty partly depends on the quality of the
data.
R&D portfolio management is a strategic decision making process with high procedural
rationality. This means that it is a decision process which involves the collection of
information relevant to the decisions and the reliance upon analysis of this information in
making a choice. Consequently, decisions based on high quality data have a better chance of
advancing the business’s goals.
In this study data quality is defined as data that are fit for use by data consumers. Data
quality consists of twenty data quality dimensions which are sorted in the following four
categories: (1) intrinsic data quality, (2) contextual data quality, (3) representational data
quality, and (4) accessibility data quality. This study focuses only on intrinsic data quality;
the other three categories are beyond the scope of this research project.
Intrinsic data quality denotes that data has quality in its own right. This means that this
category entails intrinsic data properties, which are properties that the data has of itself,
independently of other things, including its context. Intrinsic data quality consists of the
following data quality dimensions: (1) believability, (2) accuracy, (3) objectivity, and (4)
reputation. For each of these dimensions several factors are identified which can be used to
assess or improve new product data quality. Moreover, an overview of factors that might
influence new product data quality in general is provided.
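As an illustration only, the four intrinsic dimensions could be captured in a small assessment record; the class name, field names, and the unweighted 1-5 scoring below are assumptions, not a method proposed in the literature.

```python
from dataclasses import dataclass

@dataclass
class IntrinsicQuality:
    """Illustrative record scoring the four intrinsic dimensions on a 1-5 scale."""
    believability: int
    accuracy: int
    objectivity: int
    reputation: int

    def overall(self) -> float:
        """Simple unweighted average across the four dimensions (an assumption)."""
        return (self.believability + self.accuracy
                + self.objectivity + self.reputation) / 4

score = IntrinsicQuality(believability=4, accuracy=3, objectivity=5, reputation=4)
# score.overall() == 4.0
```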
It can be concluded that new product data quality issues increase the difficulty of R&D
portfolio decision making. Moreover, poor data quality is a key factor that hinders the
implementation of R&D portfolio management, and slows down perceived and actual benefit
gains from this decision making process. Fortunately, new product data quality can be
improved. Well-designed forecasts can help managers to understand future risks so they can
make better business plans that inform ongoing R&D portfolio decision making. Thus,
competence in forecasting can improve new product data quality and thereby overcome the
implementation hurdle of R&D portfolio management.
2. Conceptual Model
The diagram in figure 11 suggests a conceptualization of possible factors that might affect
new product data quality. It is not meant to be complete, but rather an initial
conceptualization to organize thinking about factors that influence new product data quality.
It was developed as an outcome of the literature search.
New Product Forecasting Process - The independent variable, new product forecasting
process, is presumed to affect new product data quality. The new product forecasting
process includes the following factors: data provenance, project characteristics, process
formality, and forecasting methods. The literature review provides a thorough explanation
of each factor in sections 1.3.3.1 to 1.3.3.5.
Firm Factors – The relationship between the independent variable and the dependent
variable is likely to be influenced by firm factors, or characteristics of a firm. For example,
limited resources may rule out certain forecasting strategies or techniques. The goal is to
examine how firm factors mediate the relationship between new product forecasting and
new product data quality, not how firm factors alone influence new product data quality.
Intrinsic New Product Data Quality – The dependent variable in this study is intrinsic new
product data quality. New product data refers to the input data for the portfolio analysis. For
example, the net cash flow, the discount rate, and the time of the cash flow serve as inputs
for a popular portfolio management technique, net present value analysis. The quality of the
input data is defined as data that are fit for use by portfolio managers, with a focus on the
intrinsic aspect of data quality (believability, accuracy, objectivity, and reputation).
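As a minimal illustration of these inputs, the net present value of a project can be computed from its cash flows and a discount rate; the function and the figures below are invented for illustration only.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[t] is the net cash flow at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical new product project: 1000 invested now, returns over three years.
project = [-1000.0, 400.0, 500.0, 600.0]
value = npv(0.10, project)  # discounted at a 10% rate; roughly 227.65
```

The three named inputs appear directly: the net cash flows (the list), the discount rate (`rate`), and the timing of each flow (the list index).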
R&D Portfolio Information and Decision Quality – The quality of the R&D portfolio analysis
output depends on the new product data quality and the R&D portfolio management
process (e.g. method of analysis and process formality). This relationship and its variables
are beyond the scope of this research.
[Figure 11: Conceptual Framework. The New Product Forecasting Process (independent
variable) affects Intrinsic New Product Data Quality (dependent variable), with (mediating)
Firm Factors influencing this relationship; Intrinsic New Product Data Quality in turn feeds,
via the R&D Portfolio Management Process, into R&D Portfolio Information and Decision
Quality.]
3. Methodology
3.1 Research Setting
3.1.1 Introduction
The research will be carried out in cooperation with Bicore, a specialized business service
provider. A more exhaustive company description can be found in appendix C. Bicore is
currently developing an innovation portfolio management tool, called Flightmap. Version 1
of this tool was commercialized in early 2010. Flightmap is a web portal that provides a
business integrated, multidisciplinary portfolio management system. It generates analyses
and insights in order to formulate recommendations for a realistic and balanced portfolio.
With such a strong portfolio strategic business goals can be achieved, the return on
investment (ROI) can be improved, and risks can be influenced and diversified. The results
are shown to the users in a logical and structured way. In the third quarter of 2010 Bicore
wants to release the second version of Flightmap, which includes some form of data quality
assessment.
3.1.2 Sample
A representative part of a population (i.e. sample) will be selected based on a nonprobability
sampling procedure. To be more specific, a judgmental sampling method will be used, which
is a form of purposive sampling. According to Cooper and Schindler (2006) this type of
sample is appropriate when used in the early stages of an exploratory study. Sample
members are selected based on the following criteria: high research and development
efforts, use of formal NPD processes including portfolio management, and willingness to
cooperate in research. DSM, Océ, Philips and DAF conform to the aforementioned criteria
and therefore these four Dutch multinationals will act as the sample for the study.
Appendix D provides a short company description for each of these firms.
3.2 Research method
3.2.1 Introduction
According to Cooper and Schindler (2006) there are eight descriptors of research design.
Table 4 provides an overview of the chosen designs for this research. Next, a short
description and rationale are given for each category.
The research will be exploratory. Academic literature provides limited information on data
quality in the field of innovation sciences. Also there is no proven conceptual framework for
assessing quality of new product forecasts. Therefore, a hypothetical model will be designed
based on academic literature from various fields. Furthermore, the preliminary model will be
verified and enhanced by incorporating “best practices” regarding data quality from DSM,
Océ, Philips and DAF.
Primary data will be collected by means of communication, contrary to an observatory
approach. In the communication study, the researcher questions the subjects and collects
their responses. This is the appropriate data collection approach, since information about
opinions, attitudes, expectations, motivations, knowledge, and past events are needed
(Cooper and Schindler, 2006). This topic will be further elaborated in sections 4.1 and 4.2.
The investigator will not be able to manipulate variables and can only report what has
happened or what is happening. This type of research is called ex post facto.
The purpose of the study is to explain relationships among variables, thus the study is causal.
The research tries to find answers to questions like: how do incentives reduce behavioral
biases which deteriorate data quality? Or, does a greater number of new product forecast
techniques lead to higher new product forecast accuracy?
The study will be cross-sectional. In other words, the study will be carried out once and
therefore represents a snapshot of one point in time (Cooper and Schindler, 2006). The
constraint of time imposes the need for this type of analysis.
Valuable insights for problem solving, evaluation, and strategy are needed and therefore
emphasis is placed on detail, secured from multiple sources of information (Cooper and
Schindler, 2006). Case studies can provide this kind of information by placing emphasis on a
full contextual analysis.
The research will be conducted under actual environmental conditions (i.e. field conditions).
Furthermore, participants should perceive no deviations from everyday routines.
Category | Research Design
The degree to which the research question has been crystallized | Exploratory study
The method of data collection | Communication study
The power of the researcher to produce effects in the variables under study | Ex post facto
The purpose of the study | Causal study
The time dimension | Cross-sectional
The topical scope (breadth and depth) of the study | Case
The research environment | Field setting
The participants' perceptions of research activity | Actual routine
Table 4: Descriptors of research design. Adapted from Cooper and Schindler (2006).
3.2.2 Survey
The choice for a communication approach means that people will be surveyed. A survey is a
measurement process which is used to collect information during a (highly) structured
interview, with or without a human interviewer. By providing each participant with the same
survey one can obtain comparable data between subjects. Using the correct statistical
measures, survey findings and conclusions can be generalized to larger populations (Cooper
& Schindler, 2006).
Surveys have certain advantages as a primary data-collecting approach because of their
versatility. Different types of (abstract) information can be gathered by questioning
participants. Also, some well-chosen questions can provide information that would be much
more difficult to obtain by observation. Furthermore, surveying by telephone, mail, or
computer (internet, e-mail) makes it possible to reach participants over large distances
without any extra time or costs (Cooper & Schindler, 2006).
Semi-structured interviews
A specific type of survey is the semi-structured interview. “A semi-structured interview is a
verbal interchange where one person, the interviewer, attempts to elicit information from
another person by asking questions. […] semi-structured interviews unfold in a
conversational manner offering participants the chance to explore issues they feel are
important” (Longhurst, 2003, p. 117). Semi-structured interviews are conversational and
informal in tone, and allow for an open response from respondents. There are various
advantages of semi-structured interviews compared to other survey methods. The greatest
value lies in the depth of information and detail that can be secured. The interviewer has
more possibilities to improve the quality of the information received than with another
method. During the interview the interviewer can adjust the language according to the
specific situation and participant. Also because the interviewer receives the information
directly from the participant, vocally and visually, he can provide immediate feedback, refine
questions, and ask further if an answer is unclear (Cooper & Schindler, 2006).
Preparation of the interview is essential. The interviewer needs to be acquainted with the
topic of interest, and has a clear understanding of the goal of the interview. A list of actual
questions, or at least different themes that need to be addressed, ensures that one covers
the whole topic and receives all the information required (Longhurst, 2003). Next to that, a
recording of the interview guarantees that all information is stored.
Data Collection
By conducting semi-structured interviews qualitative data will be collected. Qualitative data
is “data in the form of descriptive accounts of observations or data which is classified by
type” (Crowther and Lancaster, 2009; p. 75). A qualitative research technique is appropriate,
since what is to be measured and evaluated (data quality) is qualitative in nature (Crowther
and Lancaster, 2009). This is the case for most aspects of data quality (e.g. believability,
objectivity, and reputation), but not all. For example, accuracy could be measured and
evaluated by quantitative techniques. However, new product data quality involves forecasts
which make it hard to measure accuracy with quantitative techniques (although not
impossible). For example, a forecasting method can be audited only against products that
were actually introduced, not against projects that were killed in the NPD process.
Moreover, qualitative data collection fits better with the relatively small sample used in this
research.
3.3 Interview Development
The main goal of the semi-structured interview is to investigate how companies manage the
quality of new product data. Moreover, possible ways to improve new product data quality
are explored. The semi-structured interview is built up in three parts. The questions in the
first part assess how companies manage their R&D portfolio. The second part deals with how
companies organize their new product forecasting process. The final part of the interview
focuses on the main goal of this study: assessing and improving new product data quality.
The complete questionnaire can be found in appendix E. Next, the theory behind the
questions will be clarified. It should be noted that not all interview questions are discussed,
since some questions merely serve to verify the validity and completeness of the
questionnaire.
The portfolio management process
The first part of the survey focuses on the portfolio management process. The first set of
questions (question 2 in appendix E) is based on a study by Cooper et al. (1998) that
revealed best practices for managing R&D portfolios. R&D portfolio management best
practices refer to portfolio “methods and techniques that have consistently shown results
superior to those achieved with other means, and which are used as benchmarks to strive
for" (Business dictionary, 2010). Insight into the portfolio management process can be gained
by asking which best practices are actually followed. Next, the best practices study by
Cooper et al. (1998) will be briefly explained.
To capture business’s portfolio performance Cooper et al. (1998) constructed multiple
metrics which include decision effectiveness, decision efficiency, portfolio balance, and
strategic alignment of the portfolio. Businesses with the best portfolio performance, i.e.
businesses that score best on the aforementioned metrics:
 Recognize portfolio management as an important task in the business;
 Have an explicit, established method for portfolio management;
 Are supported by management;
 Feature very clear and well-defined rules and procedures for portfolio management;
 Consider all projects together and treat these as a portfolio;
 Consistently apply their portfolio method to all appropriate projects;
 Use a combination of different portfolio methods;
 Handle portfolio decisions at meetings in which managers discuss projects as a group,
use their best judgment and make decisions;
 Use formal approaches to portfolio management.
The second set of questions (question 3 in appendix E) deal with process formality, which is
one of the R&D portfolio best practices. Process formality analyzes how the portfolio
management process is executed. It represents the degree to which rules, policies, and
procedures govern the portfolio management work activities (Tatikonda and Montoya,
2001). This scale measures to what degree R&D portfolio management rules and procedures
are (1) formalized via documents, (2) actually followed, and (3) if formal reviews are held.
Finally, the relations or dependencies between the portfolio management process and new
product data quality are determined by literally asking: “Which relations/dependencies do
you see between the portfolio management process and new product data quality?”
(question 5 in appendix E).
The new product forecasting process
The second part of the survey focuses on the new product forecasting process. The new
product forecasting process refers to all related activities or tasks that are in place to
forecast new product data. Thus, new product forecasting does not only involve the
forecasting of market data but also the forecasting of costs, resources, project duration and
so on.
The first set of questions (question 6 in appendix E) is based on several studies on the
subject of sales forecasting, which is an important part of new product forecasting. The
authors of these studies suggest various ways to improve the sales forecasting process.
Insight into the new product forecasting process can be gained by asking to what extent
these (sales forecasting) recommendations are followed. Next, an overview of all relevant
recommendations as identified by academic literature will be provided.
 Typically, forecasts are prepared via a formal/routine process with clear and precise
instructions (Mentzer and Kahn, 1997).
 Forecasting should be used to facilitate business planning to improve forecasting
effectiveness (Mentzer et al., 1999). Moreover, forecast accuracy will be emphasized
and game-playing minimized (Moon et al., 1998).
 A cross-functional approach to forecasting results in more accurate and relevant
forecasts (Moon et al., 1998; Mentzer et al., 1999).
 Including forecast performance in individual performance plans and reward systems
results in more accurate and credible forecasts (Mentzer and Kahn, 1997; Moon et
al., 1998; Mentzer et al., 1999).
 By establishing multidimensional metrics to measure forecast performance sources
of errors can be isolated and targeted for improvement (Mentzer and Kahn, 1997;
Moon et al., 1998; Mentzer et al., 1999).
 Enabling access to relevant information across functional areas improves forecasting
effectiveness (Mentzer et al., 1999).
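As a sketch of the multidimensional-metrics recommendation above, forecast performance could be tracked with at least two complementary measures: a bias measure to expose systematic over- or under-forecasting (e.g. game-playing) and a percentage-error measure for overall accuracy. The function names and the data below are illustrative assumptions, not taken from the cited studies.

```python
def bias(actual: list[float], forecast: list[float]) -> float:
    """Mean error; a positive value means forecasts were too low on average."""
    return sum(a - f for a, f in zip(actual, forecast)) / len(actual)

def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percentage error of the forecasts."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 120.0, 90.0]
forecast = [90.0, 110.0, 80.0]
# bias(actual, forecast) = 10.0  -> forecasts were systematically too low
# mape(actual, forecast) is roughly 0.098, i.e. about a 10% average error
```

Using both measures together helps isolate the source of error: high bias with low MAPE points to a systematic offset, while low bias with high MAPE points to scattered, unsystematic errors.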
The second set of questions (question 7 in appendix E) deal with process formality, which is
one of the sales forecasting best practices. Process formality analyzes how the new product
forecasting process is executed. It represents the degree to which rules, policies, and
procedures govern the new product forecasting work activities (Tatikonda and Montoya,
2001). This scale measures to what degree new product forecasting rules and procedures are
(1) formalized via documents, (2) actually followed, and (3) if formal reviews are held.
The relations or dependencies between the new product forecasting process and new
product data quality are determined by literally asking: “Which relations/dependencies do
you see between the new product forecasting process and new product data quality?”
(question 9 in appendix E).
Assessing and improving new product data quality
The third part of the interview focuses on assessing and improving new product data quality.
The first set of questions (question 11–11d in appendix E) tries to determine what activities
are undertaken by companies to assess the quality of new product data. The question is split
into four parts to ensure that all data quality dimensions are dealt with.
Question 12 in appendix E consists of a list of statements about possible ways to improve
new product data quality. Representatives of the participating companies are asked to rate
on a scale from 1–5 to what degree new product data quality is influenced by each principle.
Most of these principles on how to improve new product data quality are directly extracted
from the literature review. To be more specific, these can be found in the conclusions of
paragraph 1.3.3.1 to 1.3.3.5.
3.4 Data Analysis
Several techniques of analysis might be appropriate to analyze the data gathered from the
interviews, i.e. "to turn the data into information that in turn can serve to develop concepts,
theories, explanations, or understanding" (Crowther and Lancaster, 2009; p. 176). Three key
steps, which are common to any qualitative data analyzing technique, are: data reduction,
data display, and conclusion drawing and verification (Crowther and Lancaster, 2009). The
first step in analyzing qualitative data is the process of selecting, focusing, simplifying,
abstracting, and transforming data. This is done by identifying and organizing the data into
clear patterns. Data display is the next step in analyzing qualitative data. This involves the
presentation of the data in ways which enable others to assess, interpret, and evaluate the
interpretations and conclusions that are drawn by the researcher. The final step of analyzing
qualitative data is to draw definitive conclusions from the data. Final conclusions will evolve
from the complete analysis of the data and thus after the first two steps. Similar to step 2,
the process of how conclusions were drawn and verified should be open for others to
evaluate. These key steps improve the validity and reliability of the research.
4. Results
This chapter will provide an overview of the most important findings from the interviews.
The goal is to evaluate whether the findings from the literature review are also applicable for
the field of R&D portfolio management. First of all, the research design will be confirmed in
section 4.1. The two sections hereafter will reveal insights into the R&D portfolio
management process and the new product forecasting process. Finally, section 4.4 will
provide the results concerning the factors that influence new product data quality.
4.1 Confirmation of Research Design
In section 1.3.2.1 “Is New Product Data Quality Important?” it is argued that new product
data quality is of great importance to R&D portfolio managers. According to the literature
the following three reasons account for this. Firstly, R&D portfolio management decisions
based on high quality data have a better chance of advancing the business’s goals. Secondly,
new product data quality has a positive relation with the intention to use the results from
portfolio analyses. Thirdly, R&D portfolio managers will be more satisfied with the portfolio
management information system when new product data is of high quality. As will be
discussed in the next paragraph, the results from the interviews clearly support the
importance of new product data quality.
New product data quality is considered very important by all four companies. The reasoning
behind this statement is unanimously expressed as follows: "the quality of R&D portfolio
decisions heavily depend on new product data quality". This means that better
decisions are made in case of high quality new product data and decisions based on low
quality data are frequently bad decisions in hindsight. In the long term new product data
quality is recognized as key to the company’s return on innovation investments.
Clearly, R&D portfolio managers are more satisfied with the portfolio management
information system when new product data is of high quality, since it helps them to make
the right decisions in an efficient and effective manner. While user satisfaction thus has a
positive relation with data quality, the intention to use the results from portfolio analyses
has no clear relation with new product data quality. This is because portfolio management is
seen as an important framework for innovation management. Not doing portfolio
management at all is therefore even worse than doing portfolio management with uncertain
new product data.
The representative from company A also mentioned that new product data quality provides
the opportunity to compare projects in the portfolio that are at different stages of
completion more fairly. It is known that projects in an early development stage often suffer
from poor data quality. Therefore, new product data quality should be made explicit so that
it can be taken into account when comparing projects with different data quality.
In section 1.3.2.2 “What is Quality?” the concept of data quality is explained with help of a
framework that captures the aspects of data quality that are important to data consumers.
This framework consists of twenty data quality dimensions which are sorted into the
following four categories: (1) intrinsic data quality, (2) contextual data quality,
(3) representational data quality, and (4) accessibility data quality. For various reasons (see
p. 19) this research focuses only on intrinsic data quality. The results from the interviews
support this choice. All four interviewees agreed that the four intrinsic data quality
dimensions (believability, accuracy, objectivity, and reputation) represent the most
important aspects of new product data quality. Moreover, none of the interviewees
suggested any other important aspects of new product data quality.
4.2 R&D Portfolio Management
Insight into the R&D portfolio management practice has been gained by asking which R&D
portfolio management best practices, as published by Cooper et al. (1998), are actually
followed. The results from the interviews indicate that all four companies conduct R&D
portfolio management, but at company A and D this business process is still in its infancy.
According to the interviewees the R&D portfolio management process has no direct
influence on new product data quality. However, this business process does have a direct
influence on R&D portfolio information quality and decision quality. Table 5 provides an
overview of which best practices are followed by company A, B, C and D.
All companies handle portfolio decisions at meetings in which managers from different
departments (marketing, controlling/finance, research & development, business unit)
discuss projects as a group, use their best judgment and make decisions. This result is
supported by the answers to the question: "to what degree are formal reviews held?".
Here the interviewees pointed out that formal reviews, a measure of process formality, are
held as planned. The frequency of these meetings ranges from four to twelve times a year
between companies.
As shown in table 5 the remaining R&D portfolio management best practices are not
followed by company A and D. In contrast, company B and C do recognize portfolio
management as an important task in business, and the top management of these companies
are involved in this process. In addition, all R&D projects are considered together and
treated as a portfolio. It should be noted that companies create multiple portfolios. For
example, company B creates a portfolio of R&D projects for each product line and at
company C all R&D projects from a certain business unit are treated as a portfolio.
At company C a combination of portfolio methods is used which include financial methods,
the technology/market matrix, the innovation funnel, and the risk-reward diagram.
However, the focus is mainly on financial techniques, since this is seen as the only objective
way to compare very different projects. In contrast, company A, B and D only use financial
methods (e.g. return on investment and net present value) to make project selection
decisions. Moreover, it should be noted that at company B eighty percent of the projects are
driven by the European Union’s legislation, which means that most of the projects need to
be done in order to meet certain emission standards.
Finally, all companies are currently working on improving the portfolio management process.
Major efficiency and effectiveness gains are expected from improving the information
technology that is used to support decision making. For example, company A and B are
currently implementing a new portfolio management information system and at company C
a ‘standard’ approach is being developed to manage their innovation portfolio. Remarkably,
none of the interviewees indicate that the company has an explicit, established method for
portfolio management. As a result there is also no method that is consistently applied across
all appropriate projects. These results are supported by the answers to the question: “to
what degree are rules and procedures formalized via documents?”. Here the interviewees
pointed out that rules and procedures are only somewhat formalized via documents.
R&D portfolio management best practice | Followed by
Portfolio management is recognized as an important task in the business. | B, C
The company has an explicit, established method for portfolio management. | (none)
Management buys into the method, and supports it through their actions. | B, C
The method has clear rules and procedures. | (none)
All projects are considered together and treated as a portfolio. | B, C
The method is consistently applied across all appropriate projects. | (none)
A combination of different portfolio management methods is used. | C
Portfolio decisions are handled at meetings in which managers discuss projects as a group, use their best judgment and make decisions. | A, B, C, D
Table 5: R&D portfolio management best practices
4.3 New Product Forecasting
Insight into the new product forecasting process has been gained by asking which new
product forecasting best practices are followed. The results are shown in table 6. Contrary to
the R&D portfolio management process, this business process has a direct influence on new
product data quality, which in turn, influences R&D portfolio information quality and
decision quality.
At all four companies new product forecasting is used to facilitate business planning. This
activity already starts in the first stage of the innovation process (stage 1: scoping in figure 4,
p. 5). Access to relevant information is enabled across functional areas. All companies share
new product data on shared drives. However, access is restricted.
At company A and C an overall organizational process and structure is in place for new
product forecasting. This means that forecasts are prepared via a formal or routine process
with clear and precise instructions. Moreover, rules and procedures are formalized via
documents and formal reviews are held.
A cross-functional approach to new product forecasting is established by company A, B, and
C. All key stakeholders and experts from different departments are involved in the new
product forecasting process. This means that input is obtained from people in marketing,
controlling/finance, research & development, and the business unit.
Company A and C both formally evaluate forecasting performance. At company C this is
done by comparing the forecasted values with the actual values on a monthly basis. The
representative from company B pointed out that evaluating forecast performance is
considered a great opportunity to improve forecasts. However, it is mostly disregarded since
it only delivers advantages in the long term; tasks that offer short-term wins take priority.
Based on the evaluation of forecast performance it is possible to reward or penalize the
people that are accountable. This management technique is only used at company C. Here,
forecasting performance is formally rewarded by including forecast performance in
individual performance plans and reward structures. It should be noted that this does not
include any disincentives.
New product forecasting best practice | Followed by
Forecasts are prepared via a formal/routine process with clear and precise instructions. | A, C
Forecasting is used to facilitate business planning (business plan and forecasts are intertwined and developed together). | A, B, C, D
All key stakeholders/experts from different departments are involved in the new product forecasting process (cross-functional approach). | A, B, C
Forecasting performance is formally evaluated (e.g. multidimensional metrics are established). | A, C
Forecasting performance is formally rewarded (e.g. forecast performance is included in individual performance plans and reward systems). | C
Access to relevant information is enabled across functional areas. | A, B, C, D
Table 6: New product forecasting best practices
4.4 Factors Influencing New Product Data Quality
In total eighteen factors that might affect new product data quality have been examined.
Representatives of the participating companies have rated, on a scale from one to five, to
what degree new product data quality is influenced by each factor. Fifteen of these factors
are based on the literature. The remaining three factors are based on suggestions from the
interviewees. Next, each factor will be briefly described.
First of all, closeness to the core business is positively related to new product data quality.
The reasoning behind this statement is that a company has the most expertise in its main
activities. For example, the marketing department of Océ has a clearer picture of the copying
systems market than any other market outside Océ’s core business. Secondly, performing a
sanity check improves new product data quality. A sanity check is a basic test to quickly
evaluate whether data can possibly be true. This way illogical or inconsistent values can
easily be filtered out. Thirdly, new product data believability can be enhanced by
mathematically adjusting the data values of the integral portfolio. This implies that a
correction factor is used to adjust for the deviation of the actual value from the forecasted
value of the integral portfolio. This correction factor can be calculated from past data values.
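For illustration, the correction-factor idea can be sketched in a few lines of Python. The function names and figures are hypothetical and do not stem from the interview data; the sketch only shows the principle of adjusting new forecasts by the historical deviation of the integral portfolio.

```python
# Illustrative sketch: derive a correction factor from past forecast/actual
# totals of the integral portfolio, then apply it to a new forecast.

def correction_factor(past_forecasts, past_actuals):
    """Ratio of total actual to total forecasted value of the integral portfolio."""
    return sum(past_actuals) / sum(past_forecasts)

def adjust(forecast, factor):
    """Mathematically adjust a new forecast by the historical correction factor."""
    return forecast * factor

# Hypothetical past portfolio totals: forecasts were systematically optimistic.
factor = correction_factor([100.0, 120.0, 80.0], [90.0, 96.0, 84.0])  # 0.9
adjusted = adjust(50.0, factor)  # 45.0
```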
Based on their level of influence on new product data quality, the factors are clustered into
groups. Here, two levels of factor influence are distinguished: ‘high influence’ and ‘low
influence’. To be more specific, factors that are rated as 4 out of 5 or 5 out of 5 are
considered as high influence factors. The remaining factors are considered as low influence
factors. In the following sub-sections it will be explained how each of the four groups is
formed.
Group 1: Unanimous Support
The first group consists of six factors which have been rated as ‘high influence’ by all
interviewees. Thus, all representatives from the participating companies are in complete
agreement that the six factors in table 7 have a great influence on new product data quality.
Process Formality – The existence of an overall organizational process and structure for new product forecasting improves new product data quality.
Over Innovation Phases (Time) – New product data quality improves over the phases of the innovation process.
Closeness to the Core Business – Closeness to the core business is positively related to new product data quality.
Agreement over Sources – Believability of new product data is enhanced if different sources agree on the data values.
Decomposition – Accuracy improves if the new product forecasting problem is decomposed into sub-problems.
Combination of Independent Forecasts – Objectivity and accuracy of new product forecasts are enhanced if several (independent) forecasts are combined.
Table 7: Unanimous support
Group 2: Mixed Support
The five factors in table 8 form the group called mixed support (second group). These factors
have a great influence on new product data quality according to three of the four
interviewees. Thus, each factor in this group has been rated as ‘low influence’ by one of the
representatives from the participating companies.
Measure Forecast Performance – By measuring new product forecast performance, sources of errors can be isolated and targeted for improvement.
Assumption Management – To ensure high quality new product data, assumptions and their metrics need to be generated, translated, and tracked.
Early Cross-Functional Expert/Stakeholder Involvement – Early involvement of all key stakeholders/experts from different departments improves new product data quality.
Consistency with Past Values (Reference Case Forecasting) – Believability of data is enhanced if data is consistent with past data values (e.g. reference case forecasting).
Temporality of Data – Proximity of transaction time to valid time is positively related to data believability.
Sanity Check – Performing a sanity check, a basic test to quickly evaluate whether data can possibly be true, improves new product data quality.
Table 8: Mixed support
Group 3: Little support
The five factors in table 9 form the group called little support (third group). Only one or two
interviewees believed that these factors have a great influence on new product data quality.
Thus, each factor in this group has been rated as ‘high influence’ by only one or two
representatives from the participating companies.
Interval/Probability Forecasts – New product data quality improves by making the uncertainty of the input explicit (e.g. interval or probability forecasts).
Reward Structures – Reward structures that include disincentives for not meeting forecasts will improve new product data objectivity.
Trustworthiness of Source – Trustworthiness of the data source is positively related to data believability.
Forecasting Strategy – The new product forecasting strategy, linking forecasting techniques to the type of new product, influences new product data quality.
Correction Factor – New product data believability can be enhanced by mathematically adjusting the data values of the integral portfolio.
Table 9: Little support
Group 4: Low Support
The final group, low support, consists of one factor, which has been rated as ‘low
influence’ by all interviewees. Thus, the representatives from the participating companies
are in complete agreement that the factor in table 10 has little influence on new product
data quality.
Mechanical Methods – By using mechanical methods to process information, new product data quality improves.
Table 10: Low support
It is remarkable that the representatives from the participating companies are in complete
agreement that mechanical methods have little influence on new product data quality. On
the contrary, according to the literature mechanical models (i.e. computerized models)
improve the reliability of information processing by substituting an analytical process for an
intuitive one (Stewart, 2001). Thus, highly analytical forecasting processes will generally
result in more reliable forecasts than those that are highly intuitive. Moreover, these models
can help focus discussions and serve as a foundation for effective decision making. Based on
the comments from the interviewees, a possible explanation for this inconsistency between
practice and theory is that computerized models can be conceived as a “black box”. In such a
black box model the core relationships and key assumptions cannot be understood by the
users of these models. A lack of clear understanding of the drivers of a computerized model
tempts portfolio managers to substitute an intuitive process for an analytical process (i.e.
the use of computerized models).
5. Discussion
5.1 Conclusions
5.1.1 Conceptual Model
In the conclusion of the literature review (chapter 2) a conceptual model was proposed
which suggested a conceptualization of possible factors that might affect new product data
quality. Based on the results from the interviews this theory-based model has been
restructured. The restructured model is presented in figure 12. Next, each of its components
as well as their interdependencies will be described.
Figure 12: conceptual model – New Product Forecasting Process → (+) Intrinsic New Product Data Quality → (+) R&D Portfolio Information and Decision Quality; R&D Portfolio Management Process → (+) R&D Portfolio Information and Decision Quality.
New Product Forecasting Process – In this study the independent variable, new product
forecasting process, consists of eighteen factors. All these factors have an influence on new
product data quality. However, the degree to which data quality is influenced differs
between factors. Based on the level of influence on new product data quality factors are
clustered into the following four groups: unanimous support, mixed support, little support,
and low support. A more detailed overview of these groups can be found in table 7, 8, 9, and
10.
Intrinsic New Product Data Quality – The dependent variable in this exploratory study is
intrinsic new product data quality. Based on four interviews with industry experts it seems
plausible that the four intrinsic data quality dimensions (believability, accuracy, objectivity,
and reputation) represent the most important aspects of new product data quality. When
looking at interdependencies, the results of this study show that new product data quality
has a positive effect on R&D portfolio management information and decision quality. In
other words, better decisions are made in case of high quality new product data. Based on
this research no evidence can be provided whether this relationship is significant or not.
R&D Portfolio Management Process – R&D portfolio management is a decision process with
high procedural rationality and is concerned with the analysis of portfolios containing a set
of R&D projects, technology, and new product efforts currently funded and underway. This
business process involves the collection of new product data and subsequently this
information is analyzed. While the R&D portfolio management process as a variable was
beyond the scope of this research, the results from the interviews with four industry experts
show that this process has a direct positive influence on R&D portfolio information and
decision quality. Then again, this conclusion should be considered as indicative and tentative
since no evidence can be provided whether this relationship is significant or not.
R&D Portfolio Information and Decision Quality – R&D portfolio information quality refers to
the quality of the information that results from the portfolio analyses. Under the assumption
that the management of businesses see the R&D portfolio management process as a rational
decision making process, R&D portfolio information quality has a positive relationship with
R&D portfolio decision quality. In other words, better decisions are made in case of high
quality portfolio information.
Firm Factors – The hypothesis that the relationship between the independent variable (new
product forecasting process) and the dependent variable (intrinsic new product data quality)
is influenced by firm factors, or characteristics of a firm, is not supported. This study has not
found any evidence that firm factors moderate the relationship between the new product
forecasting process and intrinsic new product data quality. Therefore, this variable has been
deleted from the model.
5.1.2 Assessing versus Improving Data Quality
The main objective of this study is to develop an approach to assessing and improving new
product data quality. Therefore, the eighteen factors which are part of the independent
variable are sorted into two groups in table 11. In total there are nineteen factors listed in
the table. This is because one of the factors is part of both groups, since it may serve to
assess and to improve new product data quality. To be more specific, consistency with past
values (reference case forecasting) is split up into two separate factors. As will be explained in
the next paragraphs, consistency with past values can be used to assess data quality and
reference case forecasting can be used to improve data quality.
The first group consists of seven factors which can be used to assess new product data
quality. In other words, these factors serve to evaluate or estimate the quality of new
product data. The first two factors in the column ‘assess data quality’ in table 11, over
innovation phases (time) and closeness to the core business, are characteristics of an R&D
project, technology, or new product effort. Therefore, these factors can only serve to assess
data quality.
The believability of new product data depends on its origin (source) and consequent
processing history. The factors agreement over sources, temporality of data, consistency
with past values, trustworthiness of source, and sanity check are all sub-dimensions of new
product data believability. These dimensions have been derived from the believability
assessment of Prat and Madnick (2007), an approach to measure data believability. This
means that the aforementioned factors are specifically suitable to assess data quality.
The second group consists of twelve factors which can be used to improve new product data
quality. All factors in the column ‘improve data quality’ in table 11 are methods/activities
that focus on making forecasts better by changing the way forecasting is done. For example,
requiring justification of forecasts (assumption management) and decomposition methods
both make the forecasting process more analytical. Moreover, these methods offer a
structured framework for the analysis and leave an audit trail that can be examined later to
retrace the arguments behind the judgments.
Process formality, or the degree to which rules, policies, and procedures govern the work
activities, also changes the way forecasting is done. The existence of an overall organizational
process and structure for new product forecasting increases forecasting effectiveness
because it provides a sense of structure and sequence to the work. Moreover, rules and
reviews provide motivation and allow personnel to assess their work activities and progress.
Another example worth mentioning is reference case forecasting. Reference case forecasting
is a systematic forecasting method that uses distributional information that is available
from comparable past initiatives. It is more than merely evaluating the extent to which the
data values are consistent with past data values.
Unanimous Support
  Assess data quality: Over Innovation Phases (Time); Closeness to Core Business; Agreement over Sources
  Improve data quality: Process Formality; Decomposition; Combination of Independent Forecasts
Mixed Support
  Assess data quality: Temporality of Data; Consistency with Past Values; Sanity Check
  Improve data quality: Early Cross-Functional Expert/Stakeholder Involvement; Measure Forecast Performance; Assumption Management; Reference Case Forecasting
Little Support
  Assess data quality: Trustworthiness of Source
  Improve data quality: Forecasting Strategy; Reward Structures (w/ disincentives); Interval/Probability Forecasts; Correction Factor
Low Support
  Improve data quality: Mechanical Methods
Table 11: assessing and improving new product data quality
5.1.3 Factor Relations
This paragraph focuses on interactions or connections between factors. For example,
forecasters can only be rewarded for excellence if the company has (1) systems in use for
measuring performance, (2) tools for providing feedback, and (3) standards and targets for
forecasting performance (Moon et al., 1998).
In order to get a clear overview of the relations between factors a confrontation matrix has
been made. This matrix is comprised of all factors plotted against each other in a rectangular
format. Moreover, the relationships between these factors are plotted in this matrix. These
relationships are based on the literature review and the interviews. The confrontation matrix
can be found in appendix F.
In essence, the confrontation matrix represents a factor network, since it is composed of
factors and the interactions or connections between these factors (Brandes and Erlebach,
2005). In order to draw conclusions from this factor network, a network analysis is
conducted with UCINET, a network analysis software program. A particularly
interesting network measure is degree centrality, a measure of network activity. For non-symmetric data the in-degree of a factor is the number of ties received by that factor and
the out-degree is the number of ties initiated by that factor (Freeman, 1979). For this study
out-degree centrality is the most interesting measure of network activity, since this
calculation best reflects the notion of factor dependency. In other words, the out-degree of
a factor represents how many other factors depend on, or are influenced by, that factor. The
table which contains a list of the out-degree centralities measures of all factors can be found
in appendix G. Next, the three most active factors and their relationships with other factors
will be discussed. These relationships should be considered as indicative and tentative since
no evidence can be provided whether these relationships are significant or not.
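The degree-centrality calculation performed by UCINET can be sketched directly on an adjacency matrix. The tiny confrontation matrix below is hypothetical (the real matrix is in appendix F); a 1 means the row factor influences the column factor, so the data is non-symmetric and in- and out-degree differ.

```python
# Illustrative sketch of in- and out-degree centrality on a directed
# (non-symmetric) factor network; factor names and ties are hypothetical.

factors = ["process formality", "assumption management", "measure performance"]
matrix = [
    [0, 1, 1],  # process formality influences the other two factors
    [0, 0, 1],  # assumption management influences measuring performance
    [0, 0, 0],  # measure performance initiates no ties here
]

# Out-degree: number of ties a factor initiates (row sums).
out_degree = {f: sum(row) for f, row in zip(factors, matrix)}

# In-degree: number of ties a factor receives (column sums).
in_degree = {f: sum(matrix[i][j] for i in range(len(factors)))
             for j, f in enumerate(factors)}
```

In this toy network, process formality has the highest out-degree, mirroring the study's finding that many other factors depend on it.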
5.1.3.1 Process Formality
Most likely, an overall organizational process and structure is needed to successfully forecast
new product data. This means that all related factors (e.g. assessing data quality, measuring
forecasts performance, combining independent forecasts and rewarding forecasters for
excellence) might need to be governed by rules, policies and procedures. For example, a
formal procedure should be used to combine forecasts in order to protect against people's
biases.
5.1.3.2 Assumption Management & Decomposition Methods
Assumption management and decomposition methods both interact or connect to the
factors ‘early cross-functional expert/stakeholder involvement’, ‘reference case forecasting’,
‘reward structures’ and ‘measure forecast performance’. Therefore, these two factors are
discussed together in this sub-section.
Assumption Management & Decomposition Methods → Early cross-functional
expert/stakeholder involvement – By using decomposition methods experts can focus on the
sub-problem that fits their expertise. Moreover, in a meeting experts/stakeholders can focus
on a pre-defined set of variables which may increase the effectiveness of the discussion.
Justification of forecasts ensures a healthy discussion about new product forecasts.
Assumption Management & Decomposition Methods → Reference case forecasting –
Identifying a relevant reference class of past, similar projects involves (1) weighing
similarities and differences on many variables and (2) determining which are the most
meaningful in forecasting. These two steps might be simplified when there is information
available about forecast assumptions. Moreover, decomposition methods provide a
structured framework to analyze similarities/differences on a pre-defined set of variables. In
this way, identifying a relevant reference class of past similar projects might be less
complicated.
Assumption Management & Decomposition Methods → Measure forecast performance –
Measuring and tracking forecast performance provides the opportunity to identify whether
changes in the development and application of forecasts are contributing, or hindering,
business success. In other words, forecast performance measures can provide support for
identifying sources of forecasting error. Decomposition methods in combination with
assumption management may help to identify opportunities for improvement because these
methods provide a structured framework to analyze forecast performance on a pre-defined
set of variables including the arguments behind the judgments.
Assumption Management & Decomposition Methods → Reward structures – The results
achieved by new products are subject to a wide variety of uncontrollable events. Therefore,
it is not always possible to assign accountability to the forecaster in the event that the
product does not live up to expectations. Decomposition methods in combination with
assumption management offer a structured framework for the analysis and leave an audit
trail that can be examined later to retrace the arguments behind the judgments. This might
help to evaluate persons’ job performance more fairly.
5.1.4 R&D Portfolio Management
The results of this research project, with regard to the R&D portfolio management process,
indicate that the companies from this sample do not have an explicit, established portfolio
management process with clear rules and procedures which is consistently applied across all
appropriate projects. Different divisions within these companies use different methods for
R&D portfolio management, and some divisions do not even conduct portfolio management
at all. According to Cooper et al. (1999) the lack of a consistently applied, explicit portfolio
management process has a major detrimental impact on portfolio performance.
It can also be concluded that all four companies rely on financial models and methods as the
dominant portfolio decision tool. Remarkably, only one of the companies uses multiple
methods for portfolio management. From the literature it is known that there is no single
method to make good project selection and/or ranking decisions. The following two reasons
account for this: (1) all methods are somewhat unreliable and (2) a single method cannot
provide enough insight, since the goals of portfolio management are wide-ranging.
With regard to decision making style it can be concluded that group decision making
dominates in R&D portfolio management. All four companies handle portfolio decisions at
meetings in which managers discuss projects as a group, use their best judgment and make
decisions.
So far, this conclusion has been focused on similarities between companies. However, there
are also clear differences. First of all, the management of company B and C view portfolio
management as an important task in the business, while the management of company A and
D do not. Secondly, the management of company B and C supports the portfolio
management process, while the management of company A and D do not. This is remarkable
since the results show that all four companies lack an explicit, established portfolio
management process. Thirdly, only company B and C consider all projects together and treat
these projects as a portfolio.
5.2 Managerial Implications
In addition to the theoretical contributions described, this study has provided new insights
for practical business management. The following sub-sections will give an overview of key
aspects to improve R&D portfolio information and decision quality.
5.2.1 R&D Portfolio Management
Formalize the portfolio management process. By developing and implementing a
systematic, explicit portfolio management process R&D portfolio information and decision
quality can be enhanced. This means there should be an overall organizational process and
structure for portfolio management, i.e. a process with clear rules, procedures and reviews.
Moreover, the best results are achieved if this process is applied consistently across all
appropriate projects and if the portfolio management process is supported by senior
management. This also implies that senior management needs to be convinced of the
importance of R&D portfolio management.
Combine different portfolio methods. A combination of different portfolio management
methods can improve R&D portfolio information and decision quality. This means that
companies should not over-rely on financial portfolio selection methods. Instead companies
should use several portfolio selection methods concurrently. For example, a combination of
financial methods, strategic approaches, and bubble diagrams. Portfolio management
information systems could help to convert new product data into the required information
in an efficient way.
5.2.2 New Product Data Quality
Formalize the new product forecasting process. By developing and implementing a
systematic, explicit new product forecasting process R&D portfolio information and decision
quality can be enhanced. In other words, there should be an overall organizational process
and structure for new product forecasting, i.e. a process with clear rules, procedures and
reviews. This means that all related factors, as identified in this study (e.g. assessing data
quality, measuring forecasts performance, combining independent forecasts and rewarding
forecasters for excellence), need to be governed by rules, policies and procedures.
Decompose the new product estimation problem. The new product estimation problem
should be decomposed into sub-problems that can be more easily or confidently estimated;
these sub-problems are then aggregated based on sound rules. For example, the sales
revenue forecast can be decomposed into a market size forecast, a market share forecast,
and a product price forecast. Not only does this improve forecasting accuracy,
decomposition methods can also help to improve forecasting reliability since these methods
offer a structured framework for the analysis. Moreover, decomposition methods leave an
audit trail that can be examined later to retrace the arguments behind the judgments.
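The decomposition described above can be sketched as follows; the sub-forecast values are hypothetical, and multiplication is the aggregation rule for this particular decomposition.

```python
# Illustrative sketch: decompose the sales revenue estimate into three
# sub-forecasts and aggregate them with a sound rule (multiplication).

def sales_revenue(market_size_units, market_share, unit_price):
    """Aggregate the market size, market share, and price sub-forecasts."""
    return market_size_units * market_share * unit_price

revenue = sales_revenue(market_size_units=1_000_000,
                        market_share=0.05,
                        unit_price=40.0)  # 2,000,000.0
```

Each sub-forecast can be estimated (and later audited) separately, which is where the audit trail comes from.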
Require justification of forecasts. Another way to improve the reliability of information
processing is by requiring justification of forecasts. Requiring justification of forecasts will
make the forecasting process more analytical. In other words, new product forecasting
should be viewed as a process of assumption management, which involves the generation,
translation, and tracking of assumptions and their metrics.
Perform a sanity check. A relatively quick way to improve new product data quality is by
performing a sanity check. A sanity check is a basic test to quickly evaluate whether data can
possibly be true. This way illogical or inconsistent values can easily be filtered out. The best
results are achieved if this test is conducted by a group of experts. Additionally, the
assessment of new product data quality can be based on project characteristics and data
provenance, i.e. information that helps determine the data’s source and consequent
processing history.
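A sanity check of this kind can be sketched as a small rule set; the field names and bounds below are hypothetical, not taken from any of the participating companies.

```python
# Illustrative sketch: a sanity check that filters illogical or inconsistent
# new product data values before they enter the portfolio analysis.

def sanity_check(record):
    """Return a list of violated rules; an empty list means the record passes."""
    violations = []
    if record["market_share"] < 0 or record["market_share"] > 1:
        violations.append("market share must lie between 0 and 1")
    if record["price"] <= 0:
        violations.append("price must be positive")
    if record["revenue_forecast"] > record["market_size"] * record["price"]:
        violations.append("revenue cannot exceed market size times price")
    return violations

record = {"market_share": 1.3, "price": 200.0,
          "market_size": 10_000, "revenue_forecast": 1_500_000.0}
problems = sanity_check(record)  # flags the impossible market share
```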
Measure and track forecast performance. Measuring and analyzing forecast performance
has a positive effect on new product data quality. It provides the opportunity to identify
whether changes in the development and application of forecasts are contributing, or
hindering, business success. In other words, by establishing multidimensional metrics to
measure forecast performance sources of errors can be isolated and targeted for
improvement. It is also important to track performance at each point at which forecasts may
be adjusted. Moreover, forecasting performance measures can be incorporated into job
performance evaluation criteria in order to increase new product data objectivity i.e. reduce
forecast biases due to optimism and advocacy.
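Two simple metrics of this multidimensional kind can be sketched as follows; the choice of mean absolute percentage error and mean bias is an assumption for the example, since the text only calls for multidimensional metrics.

```python
# Illustrative sketch: two forecast-performance metrics that help isolate
# sources of error (accuracy) and systematic optimism (bias).

def mape(forecasts, actuals):
    """Mean absolute percentage error of the forecasts."""
    return sum(abs(f - a) / abs(a) for f, a in zip(forecasts, actuals)) / len(actuals)

def mean_bias(forecasts, actuals):
    """Positive values indicate systematic over-forecasting (optimism/advocacy)."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / len(actuals)

forecasts = [110.0, 95.0, 130.0]  # hypothetical numbers
actuals = [100.0, 100.0, 120.0]
error = mape(forecasts, actuals)
bias = mean_bias(forecasts, actuals)  # 5.0: forecasts run optimistic on average
```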
Take an “outside view”. By using all the distributional information that is available from
comparable past initiatives (i.e. taking an “outside view”) new product data objectivity can
be enhanced. Reference class forecasting is a method for systematically taking an outside
view on forecasting problems. This method consists of the following three steps: (1) Identify
a relevant reference class of past, similar projects, (2) establish a probability distribution for
the selected reference class, and (3) compare the specific project with the reference class
distribution, in order to establish the most likely outcome for the specific project.
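The three steps can be sketched as follows. The reference class data (actual/forecast outcome ratios of past projects) and the use of the median as the "most likely outcome" are assumptions for the example.

```python
# Illustrative sketch of reference class forecasting in three steps.
from statistics import median, quantiles

# Step 1: identify a relevant reference class of past, similar projects —
# here, hypothetical actual/forecast outcome ratios.
reference_class = [0.6, 0.7, 0.8, 0.8, 0.9, 1.0, 1.1, 1.2]

# Step 2: establish an empirical probability distribution for the class.
spread = quantiles(reference_class, n=4)  # quartiles of the distribution

# Step 3: compare the specific project with the reference class distribution
# to establish its most likely outcome.
own_forecast = 200.0
most_likely_outcome = own_forecast * median(reference_class)  # 170.0
```

Taking the "outside view" here means the project's own optimistic forecast is scaled by what comparable projects actually achieved.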
Combine multiple independent forecasts. New product forecasting accuracy and objectivity
can be substantially improved through the combination of multiple independent forecasts.
Independent forecasts can be generated in the following three ways: (1) by analyzing
different data, (2) by using different forecasting methods, or (3) by using independent
forecasters. Furthermore, when sufficient resources are available, it is best to combine
forecasts from at least five methods.
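A formal combination procedure can be as simple as an equal-weight average, which guards against any single forecaster's bias; the numbers below are hypothetical.

```python
# Illustrative sketch: combine several independent new product forecasts
# with a (weighted) average; equal weights by default.

def combine(forecasts, weights=None):
    """Weighted average of independent forecasts."""
    if weights is None:
        weights = [1.0 / len(forecasts)] * len(forecasts)
    return sum(w * f for w, f in zip(weights, forecasts))

# Hypothetical forecasts generated from different data, different methods,
# and independent forecasters (five, as the guideline suggests).
independent = [120.0, 100.0, 90.0, 110.0, 105.0]
combined = combine(independent)  # 105.0
```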
Use a cross-functional forecasting approach. New product data quality can be improved by
involving all key stakeholders/experts from different departments in new product
forecasting. Companies forecast most effectively when a cross-functional forecasting
approach is used. This means that input is obtained from people in different functional
areas, each of whom contributes relevant information and insights that can improve overall
accuracy.
Focus on most influential data. It is advised to first focus on improving new product data
which is of greatest concern in terms of data quality and impact on the business case. The
impact on the business case can be tested with a sensitivity analysis, a technique for
systematically changing parameters in the business model to determine the effects of such
changes.
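A one-at-a-time sensitivity analysis of this kind can be sketched as follows; the toy business case model and the +10% perturbation are assumptions for the example.

```python
# Illustrative sketch: vary each business case parameter in turn and record
# the effect on the outcome, to find the most influential data.

def business_case(params):
    """Toy business case: profit = market size * market share * price - fixed cost."""
    return (params["market_size"] * params["market_share"] * params["price"]
            - params["fixed_cost"])

base = {"market_size": 100_000, "market_share": 0.10, "price": 50.0,
        "fixed_cost": 300_000.0}
base_profit = business_case(base)  # 200,000.0

sensitivity = {}
for name in ("market_size", "market_share", "price"):
    perturbed = dict(base, **{name: base[name] * 1.10})  # +10% perturbation
    sensitivity[name] = business_case(perturbed) - base_profit
```

Parameters with the largest effect on the outcome are the ones whose data quality deserves attention first.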
Indicate data quality. By explicitly indicating new product data quality R&D portfolio
information and decision quality can be enhanced. This is because projects with different
data quality can be compared more fairly.
Improve the quality of execution of the predevelopment activities. Obviously, new product
data quality is directly affected by the quality of execution of the predevelopment activities
(e.g. preliminary market assessment, preliminary technical assessment, market study,
business analysis, and financial analysis). Therefore, investing sufficient time and resources
in these activities is of critical importance.
5.3 Academic Implications & Further Research
This research project has discovered a clear gap in the academic literature. To be more
specific, the role of new product data quality in R&D portfolio management is quite new to
the academic literature. Existing studies on new product data quality are rare and limited to
forecasting accuracy, which is only a small part of the multidimensional concept of data
quality. Therefore, based on past research efforts, it is unclear which factors can be used to
assess or improve new product data quality. This study proposed a new way of looking at
R&D portfolio management by formalizing new product data quality. The importance of
research on this topic is acknowledged by four industry experts on R&D portfolio
management. These experts consider new product data quality as very important since the
quality of R&D portfolio decisions heavily depends on new product data quality. In the long
term new product data quality is key to the company’s return on innovation investments.
This study constitutes a starting point in considering the different factors that affect new
product data quality. Moreover, the interactions or connections between these factors are
hypothesized. Therefore, future research could focus on validating the current findings with
a more descriptive and quantitative research approach.
Secondly, this study is completely focused on intrinsic data quality, while according to Wang
and Strong (1996) data quality consists of three more categories (i.e. contextual,
representational, and accessibility data quality). This means that in total sixteen data quality
dimensions have not been taken into account. Therefore, future research could focus on the
role of these three data quality categories in R&D portfolio management. Interesting
research questions could be: “How are contextual, representational, and accessibility new
product data quality influenced?”, or “To what degree is R&D portfolio management
information and decision quality influenced by contextual, representational, and accessibility
new product data quality?”.
5.4 Limitations
This research has a number of limitations that must be acknowledged. These limitations
imply that the findings should be viewed with caution.
First, the sample used for this study is small. Although a sample of four companies can be
analyzed, the results are less generalizable to the overall population. Therefore, the
conclusions from this research only give an indication of the relationships between (1) the
new product forecasting process and intrinsic new product data quality, (2) intrinsic new
product data quality and R&D portfolio information and decisions quality, and (3) the R&D
portfolio management process and R&D portfolio information and decision quality. This also
means that these relationships are only valid for the sample on which the analysis has been
conducted.
Another limitation is that the aforementioned relationships should be considered as
indicative and tentative since no evidence can be provided whether these relationships are
significant or not. Even more so, because in the literature review no (re)usable empirical
evidence on significant relationships has been found.
The third limitation relates to the method with which most of the data has been gathered: the semi-structured interview. Interviewees may present themselves, and the company they work for, in a more favorable light than is actually warranted. Such a social-desirability bias (King & Bruner, 2000) may have influenced the participants' answers to the questions about the R&D portfolio management and new product forecasting best practices.
Despite these limitations, the results of this study will be of great interest and value to
academics as well as practitioners in the field of R&D portfolio management.
References
Adams-Bigelow, M. (2005), First results from the 2003 comparative performance assessment
study (CPAS) in the PDMA handbook of new product development (second edition), New
Jersey: John Wiley & Sons.
AIM global data and risk management survey (2005).
URL: http://www.aimsoftware.com/studie-neu/results-2005/results_2005/AIM_Global_Data_and_Risk_Management_Survey_2005_Full_report.pdf [Retrieved July 5, 2010]
Ansoff, I. (1957), Strategies for Diversification, Harvard Business Review, Vol. 35 Issue 5,
pp.113-124.
Armstrong, J.S. (Ed.) (2001), Principles of forecasting: a handbook for researchers and practitioners, Boston: Kluwer Academic Publishers.
Armstrong, J.S. (2001), Combining forecasts, in Principles of forecasting: a handbook for
researchers and practitioners, Boston: Kluwer Academic Publishers.
Ausura, B., Gill, B. and Haines, S. (2005), Overview and context for life-cycle management, in
the PDMA handbook of new product development (second edition), New Jersey: John Wiley
& Sons.
Bates, J.M. and Granger, C.W.J. (1969), The combination of forecasts, Operations Research
Quarterly, Vol. 20, pp. 451-468.
Black, F. and Scholes, M. (1973), The pricing of options and corporate liabilities, Journal of Political Economy, Vol. 81, No. 3, pp. 637-654.
Booz, Allen & Hamilton (1982), New Product Development in the 1980s, New York: Booz,
Allen & Hamilton.
Brandes, U. and Erlebach T. (2005), Network analysis: methodological foundations, Berlin
Heidelberg: Springer-Verlag.
Brown S.L. and Eisenhardt K.M. (1995), Product development: past research, present
findings, and future directions, Academy of Management Review, Vol. 20, No. 2, 343-378.
Business Dictionary
URL: http://www.businessdictionary.com/definition/garbage-in-garbage-out-GIGO.html [Retrieved July 5, 2010]
Business Dictionary
URL: http://www.businessdictionary.com/definition/best-practice.html [Retrieved June 16, 2010]
Carbonell-Foulquié, P., Munuera-Alemán, J.L. and Rodríquez-Escudero, A.I. (2004), Criteria
employed for go/no-go decisions when developing successful highly innovative products,
Industrial Marketing Management, Vol. 33, 307-316.
Chandler, A.D., Jr. (1969), Strategy and structure: chapters in history of the American
industrial enterprise, Massachusetts: MIT Press.
Chatfield, C. (1993), Calculating interval forecasts, Journal of Business & Economic Statistics,
Vol. 11, No. 2, pp. 121-135.
Chesbrough, H.W. (2003) Open innovation: the new imperative for creating and profiting
from technology, Boston: Harvard Business School Press.
Chesbrough, H.W., Vanhaverbeke, W., and West, J. (2006), Open innovation: researching a
new paradigm, Oxford: Oxford University Press.
Clemen, R.T. (1989), Combining forecasts: a review and annotated bibliography,
International Journal of Forecasting, Vol. 5, pp. 559-583.
Cooper, R.G. (1990), Stage-gate systems: a new tool for managing new products, Business
Horizons, May-June.
Cooper, R.G. (2006), Formula for success in new product development.
URL: http://www.stage-gate.com/downloads/Formula_for_Success_in_New_Product_Development.pdf [Retrieved April 13, 2010]
Cooper, R.G., and Edgett, S.J. (1986), An investigation into the new product process: steps,
deficiencies, and impact, Journal of Product Innovation Management, Vol. 3, 71-85.
Cooper, R.G., and Edgett, S.J. (2007), Ten ways to make better portfolio and project selection
decisions.
URL: http://www.planview.com/docs/Planview-Stage-Gate-10-Ways.pdf, [Retrieved April 13,
2010]
Cooper R.G., Edgett, S.J. and Kleinschmidt, E.J. (1998), Best practices for managing R&D
portfolios, Research and Technology Management, Vol. 41, No. 4, 20-33.
Cooper, R.G., Edgett, S.J., and Kleinschmidt, E.J. (1999), New product portfolio management: practices and performance, The Journal of Product Innovation Management, Vol. 16, 333-351.
Cooper R.G., Edgett, S.J. and Kleinschmidt, E.J. (2000), New problems, new solutions: making
portfolio management more effective, Research and Technology Management, Vol. 43,
No. 2.
Cooper R.G., Edgett, S.J. and Kleinschmidt, E.J. (2005) Portfolio management: fundamental
to new product success in the PDMA handbook of new product development (second
edition), New Jersey: John Wiley & Sons.
Cooper, D.R. and Schindler, P.S. (2006), Business research methods (ninth edition), New
York: McGraw-Hill.
Crowther, D., Lancaster, G. (2008). Research methods: A concise introduction to research in
management and business consultancy (second edition). Amsterdam: Elsevier Butterworth
Heinemann.
DAF (2010), DAF Trucks N.V.: driven by quality.
URL: http://www.daf.com/EN/About-DAF/Pages/The-Company.aspx [Retrieved September
27, 2010]
Dean, J.W. and Sharfman, M.P. (1996), Does decision process matter? A study of strategic decision-making effectiveness, The Academy of Management Journal, Vol. 39, No. 2, 368-396.
DeLone W.H. and McLean E.R. (1992), Information systems success: the quest for the
dependent variable, Information systems research, Vol. 3, No. 1, 60-93.
DeLone W.H. and McLean E.R. (2003), The DeLone and McLean model of information
systems success: a ten year update, Journal of Management Information Systems, Vol. 19,
No. 4, 9-30.
DSM (2010), About DSM: company profile.
URL: http://www.dsm.com/en_US/html/about/dsm_company_profile.htm [Retrieved
September 27, 2010]
EIRMA (2002), Project portfolio management: selecting projects and allocating resources to
meet business needs, EIRMA working group report number 59
URL: http://www.eirma.asso.fr
Eisenhardt, K.M. and Zbaracki, M.J. (1992), Strategic decision making, Strategic Management
Journal, Vol. 13, pp. 17-37.
Elbanna, S. and Child, J. (2007), Influences on strategic effectiveness: development and test
of an integrative model, Strategic Management Journal, Vol. 28, pp. 431-453.
Elton, E.J. and Gruber, M.J. (1997), Modern portfolio theory 1950 to date, Journal of Banking
& Finance 21, 1743-1759.
Faems, D., van Looy, B., and Debackere, K. (2005), Interorganizational collaboration and
innovation: toward a portfolio approach, Journal of Product Innovation Management, Vol.
22, 238-250.
Flyvbjerg, B. (2007a), How optimism bias and strategic misrepresentation in early project development undermine implementation, Concept Program, The Norwegian University of Science and Technology, pp. 41-55.
Flyvbjerg, B. (2007b), Eliminating bias in early project development through reference class forecasting and good governance, Concept Program, The Norwegian University of Science and Technology, pp. 90-110.
Freeman, L.C. (1979), Centrality in social networks: conceptual clarification, Social Networks,
Vol. 1, No. 3, pp. 215-239.
Fredberg, T., Elmquist, M. and Ollila, S. (2008), Managing open innovation: present findings
and future directions, VINNOVA Report VR 2008:02.
Garcia, M.L. and Bray, O.H. (1997), Fundamentals of Technology Roadmapping, SAND97-0665, Albuquerque, NM: Sandia National Laboratories.
Griffin, A. (1997), PDMA research on new product development practices: updating trends and benchmarking best practices, Journal of Product Innovation Management, Vol. 14, 429-458.
Gustafsson, J. and Salo, A. (2005), Contingent portfolio programming for the management of
risky projects, Operations Research, Vol. 53, No. 6, pp. 946-956.
Hagedoorn, J. (1993), Understanding the rationale of strategic technology partnering:
interorganizational modes of cooperation and sectoral differences, Strategic Management
Journal, Vol. 14, 371-385.
Hagedoorn, J. (2002), Inter-firm R&D partnerships: an overview of major trends and patterns
since 1960, Research Policy, Vol. 31, 477-492.
Hart, S., Hultink, E. J., Tzokas, N. and Commandeur, R. (2003), Industrial Companies’
Evaluation Criteria in New Product Development Gates, the Journal of Product Innovation
Management, 20:22-36.
Hubbard, D.W. (2009), The failure of risk management: why it's broken and how to fix it,
New Jersey: John Wiley & Sons.
Hustad, T.P. (1996), Reviewing current practices in innovation management and a summary
of selected best practices in the PDMA handbook of new product development, New Jersey:
John Wiley & Sons
Huston, L. and Sakkab, N. (2007), Implementing open innovation, Research-Technology
Management, Vol. 50, 21-25.
Igartua, J.I., Garrigós, J.A. and Hervas-Oliver, J.L. (2010), How innovation management
techniques support an open innovation strategy, Research-Technology Management, Vol.
53, No. 3, pp. 41-52.
Jaruzelski, B., Dehoff, K., and Bordia, R. (2005), The Booz Allen Hamilton Global Innovation 1000: Money isn't everything, Strategy+business, Vol. 41.
Jaruzelski, B., Dehoff, K., and Bordia, R. (2007), The Booz Allen Hamilton Global Innovation 1000: The customer connection, Strategy+business, Vol. 49.
Kahn, K.B. (2001), An exploratory investigation of new product forecasting techniques, The
Journal of Product Innovation Management, Vol. 19, 133-143.
Kahn, K.B. (2006), New product forecasting: an applied approach, New York: M.E. Sharpe.
Kahn, B.K., Strong, D.M. and Wang, R.Y. (2002), Information quality benchmarks: product and service performance, Communications of the ACM, Vol. 45, No. 4, 184-192.
King, M.F., Bruner, G.C. (2000), Social desirability bias: a neglected aspect of validity testing,
Psychology and Marketing, Vol. 17, No. 2, pp. 79-103.
Knight, F.H. (1921), Risk, uncertainty and profit (Reprints of economic classics)
URL: http://mises.org/books/risk_uncertainty_profit_knight.pdf [Retrieved July 5, 2010]
Koen, P.A. (2005), The fuzzy front end for incremental, platform, and breakthrough products
in the PDMA handbook of new product development (second edition), New Jersey: John
Wiley & Sons.
Krishnan, V. and Ulrich, K.T. (2001), Product development decisions: a review of the
literature, Management Science, Vol. 47, No. 1, 1-21.
Lamb, C.W., Hair, J.F. and McDaniel, C.D. (2008), Essentials of marketing (sixth edition),
Mason: South-Western Cengage learning.
Lovallo, D. and Kahneman, D. (2003), Delusions of success: how optimism undermines
executives' decisions, Harvard Business Review, July Issue, pp. 56-63.
Lawrence, M., Goodwin, P., O'Connor, M., and Önkal, D. (2006), Judgmental forecasting: A review of progress over the last 25 years, International Journal of Forecasting, Vol. 22, 493-518.
Lee Y., Pipino L., Funk J., and Wang R. (2006), Journey to data quality, MIT Press, Cambridge,
MA.
Lee, Y.W., Strong, D.M., Kahn, B.K. and Wang R.Y. (2002), AIMQ: a methodology for
information quality assessment, Information & Management, Vol. 40, 133-146.
Lint, O., and Pennings E. (1997), An option approach to the new product development
process: a case study at Philips Electronics, R&D Management, Vol. 31, No. 3, 163-172.
Longhurst, R. (2003). Semi-structured interviews and focus groups. In Clifford, N.J.,
Valentine, G.(eds) (2003) Key methods in geography (p. 117-132). London: Sage.
Lucey T. (2005), Management information systems (ninth edition), London: Thompson.
MacGregor, D.G. (2001), Decomposition for judgmental forecasting and estimation, in Armstrong, J.S. (Ed.), Principles of forecasting: a handbook for researchers and practitioners, Boston: Kluwer Academic Publishers.
Maringer, D. (2005), Portfolio management with heuristic optimization, Dordrecht: Springer.
Markowitz, H.M. (1952), Portfolio selection, The Journal of Finance, Vol. 7, No. 1, pp. 77-91.
Markowitz, H.M. (1959). Portfolio Selection: Efficient Diversification of Investments. New
York: John Wiley & Sons.
Mather, M. and Lee, M.A. (2008), U.S. labor force trends, Encyclopedia Britannica Online.
URL: http://www.britannica.com/bps/additionalcontent/18/33336709/US-Labor-Force-Trends [Retrieved March 29, 2010]
McGrath, M.E. (2004), Next generation product development: how to increase productivity,
cut costs, and reduce cycle times, New York: McGraw-Hill.
McGrath, R.G. (1997), A real options logic for initiating technology positioning investments, Academy of Management Review, Vol. 22, No. 4, 974-996.
McGrath, R.G., Ferrier, W.J., and Mendelow, A.L. (2004), Real options as engines of choice and heterogeneity, Academy of Management Review, Vol. 29, No. 1, 86-101.
McGrath, M.E. (2001), Product strategy for high-technology companies (second edition),
New York: McGraw-Hill.
Mentzer, J.M. and Kahn, K.B. (1997), State of sales forecasting systems in corporate America,
The Journal of Business Forecasting, Spring, pp. 6-13.
Mentzer, J.M., Bienstock, C.C. and Kahn, K.B. (1999), Benchmarking sales forecasting
management, Business Horizons, May-June, pp. 48-56.
Mintzberg, H., Raisinghani, D. and Theoret, A. (1976), The structure of “unstructured”
decision processes, Administrative Science Quarterly, Vol. 21, No. 2, pp. 246-275.
Moon, M.A., Mentzer, J.T., Smith, C.D. and Garver, M.S. (1998), Seven keys to better
forecasting, Business Horizons, September-October, pp. 44-52.
Northrop, A., Kraemer, K.L., Dunkle, D. and King, J.L. (1990), Payoffs from computerization:
lessons over time, Public Administration Review, 505-514.
Océ (2010), Company information: organization.
URL: http://www.oce.co.uk/company/organization.aspx [Retrieved September 27, 2010]
O’Connor, P. (2004), Spiral-up implementation of NPD portfolio and pipeline management in
the PDMA toolbook 2 for new product development, New Jersey: John Wiley & Sons.
O’Connor, P. (2005), Implementing product development, in the PDMA handbook of new
product development (second edition), New Jersey: John Wiley & Sons.
Office of Government Commerce
URL: http://www.ogc.gov.uk/documentation_and_templates_business_case.asp [Retrieved June 16, 2010]
Patterson, M.L. (2005), New product portfolio planning and management, in the PDMA
handbook of new product development (second edition), New Jersey: John Wiley & Sons.
Phaal, R., Farrukh, C.J.P. and Probert, D.R. (2004), Technology roadmapping – a planning
framework for evolution and revolution, Technological Forecasting & Social Change, No 71,
pp. 5 -26.
Philips (2010), Company profile: businesses.
URL: http://www.philips.com/about/company/businesses/index.page [Retrieved September
27, 2010]
Prat, N. and Madnick, S. (2007), Measuring data believability, MIT Sloan School Working Paper 4672-07.
Presans (2009), Golden age of closed innovation.
URL: http://www.presans.com/news/golden-age-closed-innovation [Retrieved March 29, 2010]
Redman, T.C. (1998), The impact of poor data quality on the typical enterprise,
Communications of the ACM, Vol. 41, No. 2.
Rink, D.R. and Swan, J.E. (1979), Product life cycle research: a literature review, Journal of
Business Research, pp. 219-242.
Reeves, C.A. and Bednar, D.A. (1994), Defining quality: alternatives and implications, Academy of Management Review, Vol. 19, No. 3, 419-445.
Rogers, E.M. (2003), Diffusion of innovations, 5th edition, New York: Free Press.
Roussel, Philip, Saad, Kamal, and Erickson, Tamara (1991), Third Generation R&D: Managing
the Link to Corporate Strategy, Boston, MA: Harvard Business School Press & Arthur D. Little
Inc.
Salo, A.A. and Bunn, D.W. (1995), Decomposition in the assessment of judgmental probability forecasts, Technological Forecasting and Social Change, Vol. 49, 13-25.
Sanders, N.R. and Ritzman, L.P. (2004), Integrating judgmental and quantitative forecasts:
methodologies for pooling marketing and operations information, International Journal of
Operations & Production Management, Vol. 24, No. 5, pp. 514-529.
Schmidt, J.B., Sarangee, K.R. and Montoya, M.M. (2009), Exploring new product
development project review practices, The Journal of Product Innovation Management, Vol.
26, 520-535
Shim, J.P., Warkentin, M., Courtney, J.F., Power, D.J., Sharda, R. and Carlsson, C. (2002), Past, present, and future of decision support technology, Decision Support Systems, Vol. 33, 111-126.
Simmhan, Y.L., Plale, B., and Gannon, D. (2005), A survey of data provenance in e-science, SIGMOD Record, Vol. 34, No. 3, 31-36.
Stewart, T.R. (2001), Improving reliability of judgmental forecasts, in J.S. Armstrong (Ed.),
Principles of forecasting: a handbook for researchers and practitioners, Boston: Kluwer
Academic Publishers.
Tatikonda, M.V. and Montoya-Weiss, M.M. (2001), Integrating operations and marketing
perspectives of product innovation: the influence of organizational process factors and
capabilities on development performance, Management Science, Vol. 47, No. 1, pp. 141-172.
Teo, T.S.H. and Wong, P.K. (1998), An empirical study of the performance impact of
computerization in the retail industry, Omega – The International Journal of Management
Science, Vol. 26, No. 5, 611-621.
Tidd, J., Bessant, J., and Pavitt, K. (2005), Managing Innovation: Integrating Technological,
Market and Organizational Change, 3rd edition, Chichester: John Wiley & Sons Ltd.
Tyebjee T.T. (1987), Behavioral biases in new product forecasting, International Journal of
Forecasting, Vol. 3, 393-404.
Tzokas, N., Hultink, E.J. and Hart, S. (2004), Navigating the new product development
process, Industrial Marketing Management, No. 33, pp. 619-626.
Wang, R.Y. and Strong, D.M. (1996), Beyond accuracy: what data quality means to data
consumers, Journal of management information systems, Vol. 12, No. 4, 5-28.
Weatherson, B. (2006), Intrinsic vs. extrinsic properties, Stanford Encyclopedia of
Philosophy, URL: http://plato.stanford.edu/entries/intrinsic-extrinsic/ [Retrieved July 6,
2010]
Whalen, P.J. (2007), Strategic and Technology Planning on a Roadmapping Foundation,
Research Technology Management, May-June, pp. 40-51.
Wikipedia, Intrinsic and extrinsic properties (philosophy).
URL: http://en.wikipedia.org/wiki/Intrinsic_and_extrinsic_properties_(philosophy) [Retrieved July 6, 2010]
Wikipedia, Forecasting.
URL: http://en.wikipedia.org/wiki/Forecasting [Retrieved July 5, 2010]
Wikipedia, Modern portfolio theory.
URL: http://en.wikipedia.org/wiki/Modern_portfolio_theory [Retrieved May 10, 2010]
Appendix A: Definitions of Data Quality Dimensions
Accessibility: the extent to which information is available, or easily and quickly retrievable
Access security: the extent to which access to information is restricted appropriately to maintain its security
Accuracy: the extent to which information is correct and reliable
Appropriate amount of data: the extent to which the volume of information is appropriate for the task at hand
Believability: the extent to which information is regarded as true and credible
Completeness: the extent to which information is not missing and is of sufficient breadth and depth for the task at hand
Concise representation: the extent to which information is compactly represented
Cost effectiveness: the extent to which collecting data and ensuring its accuracy is cost-effective
Ease of operation: the extent to which information is easily joined, changed, updated, downloaded/uploaded, aggregated, customized, and integrated
Ease of understanding: the extent to which information is easily comprehended
Flexibility: the extent to which information is easy to manipulate and apply to different tasks
Interpretability: the extent to which information is in appropriate languages, symbols, and units, and the definitions are clear
Objectivity: the extent to which information is unbiased, unprejudiced, and impartial
Relevancy: the extent to which information is applicable and helpful for the task at hand
Representational consistency: the extent to which information is presented in the same format
Reputation: the extent to which information is highly regarded in terms of its source or content
Timeliness: the extent to which information is sufficiently up-to-date for the task at hand
Value-added: the extent to which information is beneficial and provides advantages from its use
Table A1: Definitions of data quality dimensions. Adapted from Kahn, Strong and Wang (2002), and Wang and Strong (1996).
Appendix B: Types of new products
New to the world: new products that create an entirely new market
New to the company: new products that, for the first time, allow a company to enter an established market
Additions to existing product lines: new products that supplement a company's established product lines
Improvements and revisions of existing products: new products that provide improved performance or greater perceived value and replace existing products
Repositionings: existing products targeted to new markets or market segments
Cost reductions: new products that provide similar performance at lower cost
Table B1: Types of new products.
Appendix C: Company Description
Bicore is a business service provider specialized in innovation management. Bicore strives for quality, active involvement of alliance parties, and innovative strength.
Mission
Bicore's mission is to improve ROI: the Return on Open Innovation in the high-tech industry in the Benelux. The high-tech industry consists of the sectors ICT, Electronic & Electrical Equipment, Automotive, and Life Sciences.
Vision
Bicore accomplishes this improved return on open innovation by providing practical professional services based on its own content and methods for innovation and collaboration management. This content is based on a combination of experience and research. Bicore's commitment in client assignments is focused on business results.
Company history
Since mid-2007, Simbon and BO4 have intensified their cooperation because of their many linkages in innovation expertise. Since 2002, BO4 has grown into a renowned specialist in innovation alliances, such as the World Class Maintenance consortium WCMC and the hydrogen fuel cell alliance WaterstofRegio. In the same period, Simbon has developed into a leading consultancy firm in innovation management for innovative high-tech corporations, including Philips, Océ, and DAF Trucks. In 2008 the two firms decided to formalize the cooperation in a full merger of all their activities.
Thanks to the trust it has gained from its business relations, the merged firm is now a healthy and growing company with 10 professionals. To complete this successful integration, the company operates under the new name Bicore, an abbreviation of "Business Innovation Cooperation Results": Bicore focuses on business results via innovative cooperation. The new size and scale allow it to further professionalize its methods and techniques.
Appendix D: Research Sample Descriptions
DSM
Royal DSM N.V. creates solutions that nourish, protect and improve
performance. Its end markets include human and animal nutrition and health,
personal care, pharmaceuticals, automotive, coatings and paint, electrical and
electronics, life protection and housing. DSM manages its business with a focus
on the triple bottom line of economic prosperity, environmental quality and
social equity, which it pursues simultaneously and in parallel. DSM has annual
net sales of about €8 billion and employs some 22,700 people worldwide. The
company is headquartered in the Netherlands, with locations on five
continents. DSM is listed on Euronext Amsterdam. (DSM, 2010)
Océ
Océ is one of the world’s leading providers of document management and
printing for professionals. The broad Océ offering includes office printing and
copying systems, high speed digital production printers and wide format
printing systems for both technical documentation and color display graphics.
Océ is also a foremost supplier of document management outsourcing. Many
of the world’s Fortune 500 companies and leading commercial printers are Océ
customers. The company was founded in 1877. With headquarters in Venlo,
The Netherlands, Océ is active in over 90 countries and employs some 23,000
people worldwide. In 2008 Océ achieved revenues of € 2.9 billion. Océ is listed
on Euronext Amsterdam. (Océ, 2010)
Philips
With main focus on Health and Well-being, Philips serves professional and
consumer markets through three overlapping sectors: Healthcare, Lighting and
Consumer Lifestyle. Throughout their portfolio, Philips demonstrates their
innovation capacity by translating customer insights into meaningful
technology and applications that improve the quality of people’s lives. Philips
has annual net sales of about €27 billion and employs some 125,000 people
worldwide. The company is headquartered in the Netherlands, and listed on
Euronext Amsterdam and New York Stock Exchange. (Philips, 2010)
DAF
DAF Trucks N.V. is a wholly owned subsidiary of the North American
corporation PACCAR Inc. DAF Trucks’ core activities are focused on the
development, production, marketing and sale of medium and heavy-duty
commercial vehicles. DAF works according to the ‘Build to Order’ principle. This
means that all vehicles are built to satisfy each customer’s individual wishes,
but production only starts after the order is received from the customer. This is
very important, because DAF builds tens of thousands of different vehicle
versions which are all built to meet each customer’s individual specifications
and transport requirements. (DAF, 2010)
Appendix E: Interview Questionnaire
New product data refers to the input data for portfolio analyses. For example, the net
cash flow, the discount rate, and the time of the cash flow serve as inputs for a popular
portfolio management technique, net present value analysis. The quality of the input data
is defined as data that are fit for use by portfolio managers.
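For illustration (this formula is an addition for the reader, not part of the administered questionnaire), the way these inputs combine in a net present value analysis can be written as:

```latex
\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}
```

where \(CF_t\) is the net cash flow in period \(t\), \(r\) is the discount rate, and \(T\) is the planning horizon. Errors in any of these input data propagate directly into the NPV figure on which portfolio decisions are based.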
1. Is new product data quality of great concern for Océ/DAF/DSM/Philips? And why?
The portfolio management process
2. Literature on portfolio management has proposed several best practices. Which of
the following statements are applicable to Océ/DAF/DSM/Philips?
Portfolio management is recognized as an important task in the business.
Océ/DAF/DSM/Philips has an explicit, established method for portfolio
management.
Management buys into the method, and supports it through their actions.
The method has clear rules and procedures.
All projects are considered together and treated as a portfolio.
The method is consistently applied across all appropriate projects.
A combination of different portfolio management methods is used.
If applicable, which methods are mostly used?
Portfolio decisions are handled at meetings in which managers discuss projects as a group, use their best judgment, and make decisions.
If applicable, how frequently do these meetings take place and which departments are involved?
3. The following questions refer to the formality of the portfolio management process. Formality represents the degree to which rules, policies, and procedures govern the portfolio management work activities.
To what degree are rules and procedures formalized via documents? (1 = not at all, 3 = somewhat, 5 = completely)
To what degree are rules actually followed? (1 = not at all, 3 = somewhat, 5 = completely)
To what degree are formal reviews held? (1 = not at all, 3 = somewhat, 5 = completely)
4. Are there any other relevant aspects of the portfolio management process? If yes,
please give details.
5. Which relations/dependencies do you see between the portfolio management
process and new product data quality? Or does it merely act as an “enabler”?
The new product forecasting process
6. Literature on new product forecasting management has proposed several best
practices. Which of the following statements are applicable to
Océ/DAF/DSM/Philips?
Forecasts are prepared via a formal/routine process with clear and precise
instructions.
Forecasting is recognized as a separate functional area.
Forecasting is used to facilitate business planning (business plan and forecasts
are intertwined and developed together).
In which stage of the innovation process does forecasting start?
All key stakeholders from different departments are involved in the new
product forecasting process (cross-functional approach).
If applicable, which departments are involved?
Forecasting performance is formally rewarded (e.g. forecast performance is
included in individual performance plans and reward systems).
Forecasting performance is formally evaluated (e.g. multidimensional metrics
are established).
Access to relevant information is enabled across functional areas.
7. The following questions refer to the formality of the new product forecasting process. Formality represents the degree to which rules, policies, and procedures govern the new product forecasting work activities.
To what degree are rules and procedures formalized via documents? (1 = not at all, 3 = somewhat, 5 = completely)
To what degree are rules actually followed? (1 = not at all, 3 = somewhat, 5 = completely)
To what degree are formal reviews held? (1 = not at all, 3 = somewhat, 5 = completely)
8. Are there any other relevant aspects of the new product forecasting process? If yes,
please give details.
9. Which relations/dependencies do you see between the new product forecasting
process and new product data quality? Or does it merely act as an “enabler”?
Assessing and improving data quality
Academics in the field of data and information management have identified four major
aspects of data quality. These are: believability, accuracy, objectivity, and reputation.
- Believability is defined as the extent to which data is regarded as true and credible.
- Accuracy is defined as the extent to which information is correct and reliable.
- Objectivity is defined as the extent to which information is unbiased, unprejudiced, and impartial.
- Reputation is defined as the extent to which information is highly regarded in terms of its source and content.
10. From your point of view, what are other important data quality aspects?
11. How is new product data quality currently being assessed?
a. How is new product data believability currently being assessed?
b. How is new product data accuracy currently being assessed?
c. How is new product data objectivity currently being assessed?
d. How is new product data reputation currently being assessed?
12. Academics from various fields have proposed several ways to improve new product data quality. Which of the following statements do you support? And why?
Please also rate on a scale from 1 to 5 how much each principle influences the quality of new product data.
Overall New Product Data Quality
By establishing multidimensional metrics to measure forecast performance, sources of errors can be isolated and targeted for improvement.
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
New product data quality can be improved by making the uncertainty of the input explicit (e.g. interval forecasts or probability forecasts).
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
New product data quality improves over the phases of the innovation process.
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
Closeness to the core business is positively related to new product data quality.
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
The new product forecasting strategy (linking forecasting techniques to type of new product) influences new product data quality.
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
New product data quality can be improved by involving all key stakeholders/experts from different departments in new product forecasting.
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
The degree to which rules, policies, and procedures govern the portfolio management and/or new product forecasting work activities (i.e. process formality) influences new product data quality.
To what degree is new product data quality influenced? (1 = not at all, 3 = somewhat, 5 = a lot)
New Product Data Believability
Trustworthiness of the data source is positively related to new product data
believability. In order to assess trustworthiness, the data source needs to be
visible and traceable. Moreover, a reputation system should be in place in
which trustworthiness values are saved.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Believability of new product data is enhanced if different sources agree on
the data value.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Believability of new product data is enhanced if the data value is consistent
with past data values (e.g. use of reference case forecasting).
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Proximity of transaction time to valid time is positively related to new
product data believability. This means that long-term forecasts are less
believable than short-term forecasts.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
New Product Data Accuracy
Forecasting accuracy can be substantially improved through the
combination of multiple forecasts.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
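The combination principle above can be illustrated with a minimal sketch. This code is not part of the thesis; the forecast values and weights are hypothetical:

```python
# Minimal sketch of forecast combination (illustrative values only).
def combine(forecasts, weights=None):
    """Equal-weight average, or a weighted average when weights are given."""
    if weights is None:
        return sum(forecasts) / len(forecasts)
    return sum(f * w for f, w in zip(forecasts, weights)) / sum(weights)

# Three hypothetical sales forecasts for the same new product:
print(combine([120.0, 150.0, 135.0]))             # 135.0 (equal weights)
print(combine([120.0, 150.0, 135.0], [1, 2, 1]))  # 138.75 (middle source weighted heavier)
```

In practice the weights could reflect the trustworthiness of each source, as discussed under new product data believability.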
Estimation accuracy can be improved by decomposing the estimation
problem into sub-problems, and then aggregating the estimates of the sub-problems.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Reliability of forecasts can be improved through the use of mechanical
methods to process information.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Forecasting accuracy can be improved by enabling forecasters to adjust
system generated forecasts with their judgments.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Assumption management (generation, translation, and tracking of
assumptions and their metrics) is an important step in the course of
forecasting new products to ensure high quality data.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
New Product Data Objectivity
Objectivity of new product data is enhanced if several independent forecasts
are combined (by analyzing different data / by using different forecasting
methods / by using independent forecasters).
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
Reward structures that include disincentives for not meeting
forecasts will reduce forecast biases and thus improve new product data
objectivity.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
By mathematically adjusting the values of the integral portfolio of R&D
projects, biases can be reduced. For example, by using a correction factor that
is calculated from past data values.
To what degree is new product data quality influenced?
1 (Not at all) - 2 - 3 (Somewhat) - 4 - 5 (A lot)
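As a rough sketch of this principle (not taken from the thesis; all numbers are hypothetical), a correction factor can be estimated as the mean ratio of past actual values to past forecast values and then applied to a new forecast:

```python
# Hypothetical sketch: debiasing a new forecast with a correction factor
# derived from past forecast performance.
past_forecasts = [100.0, 80.0, 120.0]   # what was forecast
past_actuals = [90.0, 60.0, 105.0]      # what was actually realized

# Mean actual-to-forecast ratio; a value below 1 indicates systematic optimism.
ratios = [a / f for a, f in zip(past_actuals, past_forecasts)]
correction = sum(ratios) / len(ratios)

new_forecast = 200.0
adjusted = new_forecast * correction  # the optimism-corrected forecast
print(round(adjusted, 1))
```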
13. Are there any other ways to improve new product data quality? If yes, please give
details.
14. Are the aforementioned ways to assess and improve new product data quality
unique for Océ/DAF/DSM/Philips, or are these applicable to every firm?
a) Which factors account for this?
b) How do these factors exert influence?
Data structure
Project parameters/variables refer to the project inputs used in portfolio analyses (i.e.
new product data). Project inputs might include financial data (e.g. amount of funding
and cost of capital), market data (e.g. market share, market size, and commercial risk),
and R&D data (e.g. technology risk, non-staffing costs, and staffing costs).
15. Which project parameters/variables are of greatest concern (impact-quality)? And
why?
16. Do you think that I should talk to somebody else at Océ/DAF/DSM/Philips concerning
certain subjects?
Appendix F: Confrontation Matrix
The confrontation matrix is shown below.
Process Formality
Use actual values and new insights
from market, technical, and financial
analysis to recalibrate forecasts
Temporality of Data
Assumption management
Correction factor
Mechanical methods
Sanity check
The weight of the correction factor is
calculated from overall past forecast
performance
How to evaluate the performance of
interval/probability forecasts?
Boundaries need to be set…
Include forecast performance in
individual performance plans.
Measure forecast performance
The interval/probability band can be
calculated from the combined
forecasts.
Consistency with past values is
implicitly tested in a sanity check
Past data values are generally only
available for certain types of new
products which include cost
reductions, product improvements,
and line extensions.
Long-term forecasts can only be
based on old reference cases
The outcomes of prior projects (the
reference class) can be based on
actual values or on planned values. In
the latter case information about
forecast performance is needed in
order to document the outcomes of
the projects.
Consistency with past values
(reference case forecasting)
Interval/probability forecasts
Experts should do a sanity check; a
basic test to quickly evaluate the
validity of the data.
Early cross-functional
expert/stakeholder involvement
The trustworthiness of a source can
be based on past forecast
performance of that particular
source.
The results achieved by new products
are subject to a wide variety of
uncontrollable events. Therefore, it is
not always possible to assign
accountability in the event that the
product does not live up to
expectations. By analyzing the
justification of forecasts
accountability can be assigned more
fairly. Assumption management
leaves an audit trail that can be
examined later to retrace the
arguments behind the judgments.
Requiring justification of forecasts
will make the forecasting process
more analytical. The resulting
assumptions and their metrics offer
an analytical framework for the
analysis of forecast performance.
Identifying a relevant reference class
of past, similar projects involves (1)
weighing similarities and differences
on many variables and (2)
determining which are the most
meaningful in forecasting. These two
steps are simplified when there is
information available about forecast
assumptions.
Agreement over sources
Early cross-functional involvement
increases agreement over sources.
Justification of forecasts ensures a
healthy discussion about forecasts
in an early stage of development.
In case of subjective weighting: a
heavier weight should be assigned to
trustworthy sources.
Closeness to the core business affects
expertise of the company.
Performing sanity checks on forecasts
for projects far from the core
business is more difficult.
Combining forecasts is most useful
when there is much uncertainty; this
should be incorporated in the strategy
The type of forecasting technique
depends on closeness to core
business/level of innovation
The results achieved by new products
are subject to a wide variety of
uncontrollable events. Therefore, it is
not always possible to assign
accountability in the event that the
product does not live up to
expectations. Decomposition methods
offer a structured framework for the
analysis and leave an audit trail that
can be examined later to retrace the
arguments behind the judgments.
Combining forecasts is most useful
when there is much uncertainty, e.g.
in case of long-range forecasts.
In case of subjective weighting: a
heavier weight should be assigned to
short-term forecasts
Early cross-functional expert
involvement is a way to combine the
judgments of experts (judgmental
forecasting) which improves validity
Independent information may be of
two kinds: (1)…, (2) the forecast
makes a different assumption about
the form of the relationship between
the variables.
When combining multiple forecasts
the believability of new product data
is enhanced if different sources agree
on the data value.
Combination of independent
forecasts
Projects far from the core business
are long-term in general
Decomposition methods offer a
structured framework for the analysis
and leave an audit trail that can be
examined later to retrace the
arguments behind the judgments.
Identifying a relevant reference class
of past, similar projects involves (1)
weighing similarities and differences
on many variables and (2)…
Decomposition methods provide a
structured framework to analyze
similarities/differences on a predefined set of variables.
By using decomposition methods
experts can focus on the sub-problem
that fits their expertise. Moreover, it
increases the effectiveness of the
discussion.
Synergy effects
Decomposition methods and
assumption management strengthen
each other
Decomposition
Trustworthiness of source
Reward structures (w/ disincentives)
Forecasting strategy
Corporate agreement on information
that is close to the core business.
Closeness to core business
Closeness to the core business affects
expertise of the company.
Involve experts and stakeholders
before their input is officially needed.
Close to core business: expertise lies
within the company.
Far from core business: expertise lies
outside the company.
New and updated information
becomes available over time (from
market, technical, and financial
analysis). Together with actual values
this information serves to measure
forecast performance, isolate errors,
and target errors for improvement.
A formal procedure should be used to
combine forecasts in order to protect
against people's biases.
Over innovation phases (time)
Measure forecast performance
Consistency with past values
(reference case forecasting)
Early cross-functional
expert/stakeholder involvement
Assumption management
Agreement over sources
Combination of independent
forecasts
Decomposition
Closeness to core business
Over innovation phases (time)
Process Formality
The people responsible for
the forecast may no
longer be associated with
the company after a long
period of time.
The type of forecasting
technique depends on the
proximity of transaction
time to valid time.
Temporality of Data
Forecasting
strategy
Reward structures may
reduce bias in the sanity
check process. This would
result in more realistic
evaluations of forecasts.
Reward structures (w/
disincentives)
Checking the trustworthiness of the
source can be part of the sanity
check. Data is less believable if it
originates from untrustworthy
sources.
Trustworthiness of source
Interval/probability forecasts
Sanity check
Mechanical
methods
Correction factor
Appendix G: Out-Degree Centrality

Factor                                                      Out-Degree Centrality
Process Formality                                           12
Assumption management                                       6
Decomposition                                               5
Measure forecast performance                                4
Closeness to core business                                  4
Forecasting strategy                                        4
Temporality of Data                                         3
Early cross-functional expert/stakeholder involvement       3
Trustworthiness of source                                   2
Over innovation phases (time)                               2
Consistency with past values (reference case forecasting)   1
Agreement over sources                                      1
Combination of independent forecasts                        1
Sanity check                                                1
Reward structures (w/ disincentives)                        0
Interval/probability forecasts                              0
Mechanical methods                                          0
Correction factor                                           0

Table G1: Centrality out-degree of factors
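Out-degree centrality, as reported in Table G1, simply counts each factor's outgoing influence links in the confrontation matrix. A minimal sketch of the computation; the edge list below is a hypothetical subset, not the thesis's actual matrix:

```python
from collections import Counter

# Hypothetical directed influence links (source factor -> influenced factor);
# illustrative only, not the actual confrontation matrix.
edges = [
    ("Process Formality", "Assumption management"),
    ("Process Formality", "Measure forecast performance"),
    ("Assumption management", "Measure forecast performance"),
    ("Decomposition", "Assumption management"),
]

# Out-degree centrality: number of outgoing edges per factor.
out_degree = Counter(src for src, _ in edges)
for factor, degree in out_degree.most_common():
    print(factor, degree)
```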