As technological innovation plays an increasingly important role in today’s knowledge economy, intellectual property, as the most important output of technological development, is highly valued for the monopoly position it generates in providing payoffs to innovation. Intellectual Property Management (IPM) helps organizations identify, enhance, and evaluate their technological strengths. A Patent Portfolio Model (PPM) is built to assess an organization’s advantages and disadvantages and to identify development potential and opportunities for optimal distribution, supporting decision-making on resource allocation and the layout of technical fields. A case study of a research institute in China shows that this method is feasible and fulfills the needs of different institutions, providing suggestions for R&D technology management.
Author(s): Li Shuyin, Zhang Xian, Xu Haiyun, Fang Shu
Organization(s): Chengdu Library and Information Center, Chinese Academy of Sciences
Source: IEEE Xplore: 2019 Portland International Conference on Management of Engineering and Technology (PICMET)
The quantitative study of science, technology and innovation (ST&I) has experienced significant growth with advancements in disciplines such as mathematics, computer science and information science. From early studies utilizing statistical methods, graph theory, and citation or co-authorship analysis, the state of the art in quantitative methods now leverages natural language processing and machine learning. However, there is no unified methodological approach within the research community or a comprehensive understanding of how to exploit the potential of text mining to address ST&I research objectives. Therefore, this chapter presents the state of the art of text mining within the framework of ST&I. The contribution of the chapter is twofold: first, it reviews the literature on how text mining has extended the quantitative methods applied in ST&I and highlights major methodological challenges; second, it discusses two detailed hands-on case studies on how to implement a text analytics routine.
Author(s): Samira Ranaei, Arho Suominen, Alan Porter, Tuomo Kässi
Organization(s): Lappeenranta University of Technology (LUT), VTT Technical Research Centre of Finland
Source: Springer Handbook of Science and Technology Indicators
Topic extraction presents challenges for the bibliometric community, and its performance still depends on human intervention and the application area. This paper proposes a novel kernel k-means clustering method incorporating a word embedding model to effectively extract topics from bibliometric data. An experimental comparison of this method with four clustering baselines (i.e., k-means, fuzzy c-means, principal component analysis, and topic models) on two bibliometric datasets demonstrates its effectiveness both across a relatively broad range of disciplines and within a given domain. An empirical study of bibliometric topic extraction from articles published by three top-tier bibliometric journals between 2000 and 2017, supported by expert knowledge-based evaluations, provides supplemental evidence of the method’s topic extraction ability. Additionally, this empirical analysis reveals insights into both overlapping and diverse research interests among the three journals that would benefit journal publishers, editorial boards, and research communities.
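The core clustering step named in the abstract can be illustrated with a minimal pure-Python sketch of kernel k-means using an RBF kernel. The toy 2-D vectors stand in for word-embedding representations of documents; the names, parameters, and data are invented for illustration and do not reproduce the authors’ implementation.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel: exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def kernel_kmeans(points, k, iters=20):
    """Kernel k-means: assign each point to the cluster whose implicit
    feature-space centroid is nearest, computed via the kernel trick."""
    n = len(points)
    K = [[rbf_kernel(points[i], points[j]) for j in range(n)] for i in range(n)]
    labels = [i % k for i in range(n)]  # simple deterministic init
    for _ in range(iters):
        new_labels = []
        for i in range(n):
            best_c, best_d = labels[i], float("inf")
            for c in range(k):
                members = [j for j in range(n) if labels[j] == c]
                if not members:
                    continue
                m = len(members)
                # ||phi(x_i) - mu_c||^2 expanded via kernel evaluations
                d = (K[i][i]
                     - 2.0 * sum(K[i][j] for j in members) / m
                     + sum(K[j][l] for j in members for l in members) / m ** 2)
                if d < best_d:
                    best_c, best_d = c, d
            new_labels.append(best_c)
        if new_labels == labels:
            break
        labels = new_labels
    return labels

# Toy 2-D vectors standing in for embedded documents (two separated groups)
docs = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
labels = kernel_kmeans(docs, k=2)
```

In practice the vectors would come from a trained word embedding model over the bibliometric corpus, with k and gamma tuned per dataset.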
Author(s): Yi Zhang, Jie Lu, Feng Liu, Qian Liu, Alan Porter, Hongshu Chen, Guangquan Zhang
Organization(s): University of Technology Sydney, Beijing Institute of Technology, Georgia Institute of Technology
Source: Journal of Informetrics
This chapter summarizes the 10-year experience of the Program in Science, Technology, and Innovation Policy (STIP) at the Georgia Institute of Technology (Georgia Tech) in support of the Center for Nanotechnology in Society at Arizona State University (CNS-ASU) in understanding, characterizing, and conveying the development of nanotechnology research and application. This work was labeled “Research and Innovation Systems Assessment” (RISA) by CNS-ASU. CNS-ASU was designed to implement a set of methods to anticipate societal impacts (including environmental, health, and safety impacts) and lay the foundation for making changes to emerging technologies at an early stage in their development.
RISA concentrates on identifying and documenting quantifiable aspects of nanotechnology, including academic, commercial/industrial, and government nanoscience and nanotechnology (nanotechnologies) activity, research, and projects. RISA at CNS-ASU engaged in the first systematic attempt of its kind to define, characterize, and track a field of science and technology. A key element of RISA was the creation of a replicable approach to bibliometrically defining nanotechnology. Researchers in STIP, and beyond, could then query the resulting datasets to address topical areas ranging from basic country and regional concentrations of publications and patents, to findings about the social science literature and environmental, health, and safety research and usage, to corporate entry into nanotechnology, and to application areas as special interests arose. Key features of the success of the program include the following:
- Having access to “large-scale” R&D abstract datasets
- Analytical software
- A portfolio that balances innovative long-term projects, such as web scraping to understand nanotechnology developments in small and medium-sized companies, with research characterizing the emergence of nanotechnology that more readily produces articles
- Relationships with diverse networks of scholars and companies working in the nanotechnology science and social science domains
- An influx of visiting researchers
- A strong core of students with social science, as well as some programming background
- A well-equipped facility and management by the principals through weekly problem-solving meetings, mini-deadlines, and the production of journal articles rather than thick final reports.
Author(s): Jan Youtie, Alan L. Porter, Philip Shapira, Nils Newman
Organization(s): Georgia Institute of Technology, Search Technology
Source: Nanotechnology Environmental Health and Safety (Third Edition)
Business intelligence enables enterprises to make effective, high-quality business decisions. In the knowledge economy, patents are seen as strategic assets for companies: they provide a competitive advantage while ensuring freedom to operate and forming the basis for new alliances. The intellectual property (IP) strategy behind a company’s patent filings is rarely disclosed in the public domain. Because of this, the only way to understand IP strategy is to look at patent filings, analyze them and, based on the trends, deduce strategy. This paper tries to uncover the IP strategies of five US and Indian IT companies by analyzing their patent filings. Business intelligence gathered via patent analytics can be used to understand the strategies companies use to build their patent portfolios and align their business needs with patenting activities. This study reveals that the Indian companies are far behind in protecting their IP, although they are now correcting course and have started aggressively protecting their inventions. It is also observed that the rival companies in the study are not directly competing with each other in the same technological domain. Firms use different patent filing strategies to gain a competitive advantage: companies use disclosure as a strategy, or try to cover many aspects of a technology in a single patent, thereby signaling their dominance in a technological area while adding information.
Author(s): Shabib-Ahmed Shaikh, Tarun Kumar Singhal
Organization(s): Symbiosis International University (SIU), Symbiosis Centre for Management Studies
Source: Journal of Intelligence Studies in Business
The variations in Manufacturing Strategy (MS) definitions create confusion and lead to a lack of shared understanding between academic researchers and practitioners on its scope. The purpose of this study is to provide an empirical analysis of the paradoxical difference between academic and industry definitions of MS. Natural Language Processing (NLP) based text mining is used to extract primary elements from the various academic and industry definitions of MS. Co-word and Principal Component Analysis (PCA) provide empirical support for the grouping into nine primary elements. We posit from the term evolution analysis that the academic literature currently faces a stasis in its definition of MS, while industry, with its emphasis on ‘context’, has been dynamic. We believe that the proposed approach and the results of the present empirical analysis can contribute to overcoming the current challenges to MS design and deployment: an imprecise definition leading to inadequate operationalisation.
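The co-word step described in the abstract can be sketched as counting term co-occurrences across definitions. The following is a minimal illustration only; the term lists are invented, and the actual study operates on NLP-extracted terms from real MS definitions.

```python
from collections import Counter
from itertools import combinations

# Toy definitions, each reduced to a list of extracted terms (illustrative)
definitions = [
    ["capability", "decision", "pattern"],
    ["capability", "context", "resource"],
    ["decision", "pattern", "context"],
]

def coword_matrix(term_lists):
    """Count how often each unordered term pair co-occurs in a definition.
    Sorting makes (a, b) and (b, a) the same key."""
    pairs = Counter()
    for terms in term_lists:
        for a, b in combinations(sorted(set(terms)), 2):
            pairs[(a, b)] += 1
    return pairs

matrix = coword_matrix(definitions)
```

The resulting pair counts feed a co-occurrence matrix on which PCA-style grouping of primary elements can then be run.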
Author(s): Sourabh Kulkarni, Priyanka Verma, R. Mukundan
Organization(s): National Institute of Industrial Engineering
Source: International Journal of Production Research
Technology Watch human agents have to read many documents in order to manually categorize and dispatch them to the correct expert, who will later add valued information to each document. In this two-step process, the first step, the categorization of documents, is time consuming and relies on the knowledge of a human categorizer agent. It does not add direct valued information to the process; that is provided in the second step, when the document is revised by the correct expert.
This paper proposes Machine Learning tools and techniques that learn from manually pre-categorized data to automatically classify new content. A real industrial context was considered for this work. Text from the original documents, text from the added-value information, and semantic annotations of those texts were used to generate different models based on manually pre-established categories. Moreover, three algorithms from different approaches were used to generate the models. Finally, the results were compared to select the best model in terms of accuracy and of the reduction in the number of documents to be read (human workload).
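The paper does not name its three algorithms here, so as one plausible stand-in, the supervised setup (learn from pre-categorized documents, classify new ones) can be sketched with a small multinomial naive Bayes classifier. Categories and training texts below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_nb(labeled_docs):
    """Fit a multinomial naive Bayes model from (text, category) pairs."""
    word_counts = defaultdict(Counter)   # category -> word frequencies
    cat_counts = Counter()               # category -> document count
    vocab = set()
    for text, cat in labeled_docs:
        cat_counts[cat] += 1
        for w in text.lower().split():
            word_counts[cat][w] += 1
            vocab.add(w)
    return word_counts, cat_counts, vocab

def classify(text, model):
    """Pick the category with the highest log posterior (Laplace smoothing)."""
    word_counts, cat_counts, vocab = model
    n_docs = sum(cat_counts.values())
    best, best_lp = None, float("-inf")
    for cat in cat_counts:
        lp = math.log(cat_counts[cat] / n_docs)          # log prior
        total = sum(word_counts[cat].values())
        for w in text.lower().split():
            lp += math.log((word_counts[cat][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

# Manually pre-categorized examples (invented for illustration)
train = [
    ("battery cell energy storage", "energy"),
    ("lithium battery charge cycle", "energy"),
    ("robot arm welding line", "automation"),
    ("assembly robot sensor control", "automation"),
]
model = train_nb(train)
```

A categorizer built this way would route each incoming document to the expert responsible for the predicted category, reducing the human reading workload described above.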
Author(s): Alain Perez, Rosa Basagoiti, Ronny Adalberto Cortez, Felix Larrinaga, Ekaitz Barrasa, Ainara Urrutia
Organization(s): Mondragon Unibertsitatea
Source: Data & Knowledge Engineering
As one of the most impactful emerging technologies, big data analytics and its related applications are powering the development of information technologies and are significantly shaping thinking and behavior in today’s interconnected world. Exploring the technological evolution of big data research is an effective way to enhance technology management and create value for research and development strategies for both government and industry. This paper uses a learning-enhanced bibliometric study to discover interactions in big data research by detecting and visualizing its evolutionary pathways. Concentrating on a set of 5840 articles derived from Web of Science covering the period between 2000 and 2015, text mining and bibliometric techniques are combined to profile the hotspots in big data research and its core constituents. A learning process is used to enhance the ability to identify the interactive relationships between topics in sequential time slices, revealing technological evolution and death. The outputs include a landscape of interactions within big data research from 2000 to 2015 with a detailed map of the evolutionary pathways of specific technologies. Empirical insights for related studies in science policy, innovation management, and entrepreneurship are also provided.
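The idea of linking topics across sequential time slices to reveal evolution (or death, when a topic finds no successor) can be sketched with a simple term-overlap heuristic. This is an illustrative simplification, not the paper’s learning-enhanced method; the topics, terms, and threshold are invented.

```python
def jaccard(a, b):
    """Overlap of two term sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def link_topics(slices, threshold=0.3):
    """For each pair of consecutive time slices, link topics whose term
    overlap meets the threshold; an earlier topic with no link dies out."""
    links = []
    for t in range(len(slices) - 1):
        for name_a, terms_a in slices[t].items():
            for name_b, terms_b in slices[t + 1].items():
                if jaccard(terms_a, terms_b) >= threshold:
                    links.append((t, name_a, name_b))
    return links

# Toy topics per time slice (terms invented for illustration)
slices = [
    {"mapreduce": {"hadoop", "mapreduce", "cluster"},
     "sensor": {"sensor", "network", "stream"}},
    {"spark": {"spark", "cluster", "mapreduce"},
     "iot": {"iot", "sensor", "stream"}},
]
links = link_topics(slices)
```

Chaining such links over many slices yields the kind of evolutionary-pathway map the paper visualizes, with unlinked topics marking technological death.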
Author(s): Yi Zhang, Ying Huang, Alan L. Porter, Guangquan Zhang, Jie Lu
Organization(s): University of Technology Sydney, Hunan University
Source: Technological Forecasting and Social Change
Technology strategy plays an increasingly important role in today’s Mergers and Acquisitions (M&A) activities. Informing that strategy with empirical intelligence offers great potential value to R&D managers and technology policy makers. This paper proposes a methodology, based on patent analysis, to extract technical intelligence for identifying M&A target technologies and evaluating relevant target companies, thereby facilitating M&A target selection. We apply a term clumping process and a trend analysis, together with policy and market information, to profile the present R&D status and capture future development signals and trends in order to grasp a range of significant domain-based technologies. Furthermore, a comparison between a selected acquirer and leading players is used to identify significant technologies and sub-technologies for specific strategy-oriented technology M&A activities. Finally, to recommend appropriate M&A target companies, we set up an index-based system that evaluates target candidates from both the acquiring-firm and target-firm perspectives and weighs them differentially for specific M&A situations. We provide an empirical study in the field of computer numerical control machine tools (CNCMT) in China to identify technology M&A targets for an emerging Chinese CNCMT company, Estun Automation, under different M&A strategies.
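An index-based evaluation with strategy-specific weights, as described above, reduces in its simplest form to a weighted scoring scheme. The indicators, weights, and company names below are invented purely to show the mechanics, not the paper’s actual index system.

```python
def score_targets(candidates, weights):
    """Rank candidates by the weighted sum of their indicator values."""
    scored = {}
    for name, indicators in candidates.items():
        scored[name] = sum(weights[k] * indicators[k] for k in weights)
    # Highest score first
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Invented indicators normalized to [0, 1]
candidates = {
    "CompanyA": {"patent_strength": 0.8, "tech_fit": 0.6, "size_fit": 0.4},
    "CompanyB": {"patent_strength": 0.6, "tech_fit": 0.9, "size_fit": 0.7},
}
# Weights reflecting one hypothetical M&A strategy; a different strategy
# (e.g. patent-driven acquisition) would re-weight the same indicators
weights = {"patent_strength": 0.5, "tech_fit": 0.3, "size_fit": 0.2}
ranking = score_targets(candidates, weights)
```

Re-running the ranking under different weight profiles corresponds to evaluating candidates under different M&A strategies.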
Author(s): Tingting Ma, Yi Zhang, Lu Huang, Lining Shang, Kangrui Wang, Huizhu Yu, Donghua Zhu
Organization(s): Beijing Wuzi University, Beijing Institute of Technology, University of Technology Sydney
Source: Technological Forecasting and Social Change
This paper performs a quantitative analysis of trends in technology mining (TM) approaches using 5 years (2011–2015) of Global TechMining (GTM) conference proceedings as a data source. These proceedings are processed with the help of VantagePoint software, providing a “tech mining for analyzing tech mining” approach. Through quantitative data processing (bibliometric analysis, natural language processing, statistical analysis, and principal component analysis (PCA)), this study presents an overview and explores the dynamics and potential of existing and advanced TM methodologies in three layers: related methods, data sources, and software tools. The main groups and combinations of TM and related methods are identified. Key trends and weak signals are detected concerning the use of existing methods (natural language processing (NLP), mapping, network analysis, etc.) and emerging ones (web scraping, ontology modeling, advanced bibliometrics, semantic TRIZ (the theory of inventive problem solving), sentiment analysis, etc.). The results can serve as a guide for researchers, practitioners, or policy makers involved in foresight activities.
Author(s): Nadezhda Mikova
Organization(s): Higher School of Economics
Source: Anticipating Future Innovation Pathways Through Large Data Analysis, pp. 59–69