Business intelligence enables enterprises to make effective, high-quality business decisions. In the knowledge economy, patents are strategic assets: they provide a competitive advantage, ensure the freedom to operate, and form the basis for new alliances. Companies rarely disclose the intellectual property (IP) strategy behind their patent filings publicly, so the only way to understand that strategy is to examine the filings themselves, analyze them and, based on the trends, deduce the strategy. This paper tries to uncover the IP strategies of five US and Indian IT companies by analyzing their patent filings. Business intelligence gathered through patent analytics can be used to understand how companies build their patent portfolios and align their business needs with their patenting activities. The study reveals that the Indian companies are far behind in protecting their IP, although they are now course-correcting and have started aggressively protecting their inventions. It is also observed that the rival companies in the study do not compete directly in the same technological domains. Firms use different patent filing strategies to gain a competitive advantage: some use disclosure itself as a strategy, while others cover many aspects of a technology in a single patent, signaling dominance in a technological area while adding information.
Author(s): Shabib-Ahmed Shaikh, Tarun Kumar Singhal
Organization(s): Symbiosis International University (SIU), Symbiosis Centre for Management Studies
Source: Journal of Intelligence Studies in Business
Variations in Manufacturing Strategy (MS) definitions create confusion and lead to a lack of shared understanding between academic researchers and practitioners about its scope. The purpose of this study is to provide an empirical analysis of the paradox in the difference between academic and industry definitions of MS. Natural Language Processing (NLP) based text mining is used to extract primary elements from the various academic and industry definitions of MS. Co-word and Principal Component Analysis (PCA) provide empirical support for grouping these into nine primary elements. From the term-evolution analysis, we posit that the academic literature has reached a stasis on the definition of MS, while industry, with its emphasis on ‘context’, has remained dynamic. We believe the proposed approach and the results of the present empirical analysis can help overcome the current challenges to MS design and deployment – an imprecise definition leading to its inadequate operationalisation.
Author(s): Sourabh Kulkarni, Priyanka Verma, R. Mukundan
Organization(s): National Institute of Industrial Engineering
Source: International Journal of Production Research
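The co-word step described in the abstract above can be sketched as counting term pair co-occurrences across definitions. The definitions, vocabulary terms, and counts below are invented toy data, not the study's corpus; the PCA grouping stage is omitted.

```python
from collections import Counter
from itertools import combinations

def coword_matrix(definitions, vocabulary):
    """Count how often each pair of vocabulary terms co-occurs in a definition."""
    pairs = Counter()
    for text in definitions:
        present = sorted({t for t in vocabulary if t in text.lower()})
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical toy "definitions" of manufacturing strategy
defs = [
    "a pattern of decisions shaping manufacturing capability and resources",
    "the deployment of resources to support competitive priorities",
    "decisions on process and infrastructure aligned with competitive priorities",
]
vocab = ["decisions", "resources", "competitive priorities", "process"]
matrix = coword_matrix(defs, vocab)
print(matrix[("competitive priorities", "resources")])  # co-occur in 1 definition
```

The resulting pair counts form the co-occurrence matrix that a PCA can then reduce to a small number of component groupings.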
In Technology Watch, human agents have to read many documents in order to manually categorize and dispatch them to the correct expert, who will later add valued information to each document. In this two-step process, the first step, document categorization, is time-consuming and relies on the knowledge of a human categorizer. It adds no direct valued information to the process; that information is provided in the second step, when the document is reviewed by the appropriate expert.
This paper proposes Machine Learning tools and techniques to learn from the manually pre-categorized data in order to automatically classify new content. A real industrial context was considered for this work. Text from the original documents, text from the added-value information, and Semantic Annotations of those texts were used to generate different models against the manually pre-established categories. Moreover, three algorithms from different approaches were used to generate the models. Finally, the results were compared to select the best model in terms of accuracy and of the reduction in the number of documents to be read (human workload).
Author(s): Alain Perez, Rosa Basagoiti, Ronny Adalberto Cortez, Felix Larrinaga, Ekaitz Barrasa, Ainara Urrutia
Organization(s): Mondragon Unibertsitatea
Source: Data & Knowledge Engineering
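The core idea of learning from pre-categorized documents can be illustrated with a minimal bag-of-words classifier; this is a toy stand-in, not the paper's three actual algorithms, and all documents and category names below are invented.

```python
from collections import Counter

def train(docs):
    """docs: list of (text, category); build a word-count profile per category."""
    profiles = {}
    for text, cat in docs:
        profiles.setdefault(cat, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    """Assign the category whose profile shares the most word mass with the text."""
    words = text.lower().split()
    return max(profiles, key=lambda c: sum(profiles[c][w] for w in words))

# Hypothetical manually pre-categorized documents
labelled = [
    ("battery cell anode lithium", "energy"),
    ("lithium cathode electrolyte battery", "energy"),
    ("router packet network latency", "networking"),
    ("network protocol packet switch", "networking"),
]
profiles = train(labelled)
print(classify(profiles, "new lithium battery chemistry"))  # → energy
```

New documents are routed by the learned profiles, so only the (hopefully correct) expert for each category has to read them.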
As one of the most impactful emerging technologies, big data analytics and its related applications are powering the development of information technologies and are significantly shaping thinking and behavior in today’s interconnected world. Exploring the technological evolution of big data research is an effective way to enhance technology management and create value for research and development strategies for both government and industry. This paper uses a learning-enhanced bibliometric study to discover interactions in big data research by detecting and visualizing its evolutionary pathways. Concentrating on a set of 5840 articles derived from Web of Science covering the period between 2000 and 2015, text mining and bibliometric techniques are combined to profile the hotspots in big data research and its core constituents. A learning process is used to enhance the ability to identify the interactive relationships between topics in sequential time slices, revealing technological evolution and death. The outputs include a landscape of interactions within big data research from 2000 to 2015 with a detailed map of the evolutionary pathways of specific technologies. Empirical insights for related studies in science policy, innovation management, and entrepreneurship are also provided.
Author(s): Yi Zhang, Ying Huang, Alan L. Porter, Guangquan Zhang, Jie Lu
Organization(s): University of Technology Sydney, Hunan University
Source: Technological Forecasting and Social Change
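Detecting interactive relationships between topics in sequential time slices can be sketched as linking each later topic to its most similar earlier topic by cosine similarity over term vectors; the topic names, term vectors, and threshold below are hypothetical, not the paper's learning-enhanced method.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_topics(earlier, later, threshold=0.3):
    """Connect each later-slice topic to its most similar earlier topic."""
    links = {}
    for name, vec in later.items():
        best = max(earlier, key=lambda n: cosine(earlier[n], vec))
        if cosine(earlier[best], vec) >= threshold:
            links[name] = best
    return links

# Invented toy time slices: a topic with no earlier match counts as newly born
slice_2005 = {"mapreduce": Counter(mapreduce=3, cluster=2, batch=2)}
slice_2010 = {"hadoop": Counter(hadoop=3, mapreduce=2, cluster=2),
              "privacy": Counter(privacy=4, anonymity=2)}
links = link_topics(slice_2005, slice_2010)
print(links)  # → {'hadoop': 'mapreduce'}
```

Chaining such links across consecutive slices yields evolutionary pathways; topics with no outgoing link in later slices correspond to the "death" events the abstract mentions.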
Technology strategy plays an increasingly important role in today’s Mergers and Acquisitions (M&A) activities. Informing that strategy with empirical intelligence offers great potential value to R&D managers and technology policy makers. This paper proposes a methodology, based on patent analysis, to extract technical intelligence to identify M&A target technologies and evaluate relevant target companies to facilitate M&A target selection. We apply the term clumping process and a trend analysis together with policy and market information to profile the present R&D status and capture future development signals and trends, in order to grasp a range of significant domain-based technologies. Furthermore, a comparison between a selected acquirer and leading players is used to identify significant technologies and sub-technologies for specific strategy-oriented technology M&A activities. Finally, aiming to recommend appropriate M&A target companies, we set up an index-based system to evaluate target candidates from both the acquiring-firm and the target-firm perspectives, with weights differentiated for specific M&A situations. We provide an empirical study in the field of computer numerical control machine tools (CNCMT) in China to identify technology M&A targets for an emerging Chinese CNCMT company, Estun Automation, under different M&A strategies.
Author(s): Tingting Ma, Yi Zhang, Lu Huang, Lining Shang, Kangrui Wang, Huizhu Yu, Donghua Zhu
Organization(s): Beijing Wuzi University, Beijing Institute of Technology, University of Technology Sydney
Source: Technological Forecasting and Social Change
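An index-based evaluation like the one described above can be illustrated as a weighted sum over normalised indicators, with the weights varied per M&A strategy. The indicator names, weights, and candidate values below are invented for illustration, not the paper's actual index system.

```python
def score(candidate, weights):
    """Weighted sum of normalised indicator values (all in [0, 1])."""
    return sum(weights[k] * candidate[k] for k in weights)

# Hypothetical weights for a technology-led acquisition strategy;
# a market-led strategy would shift weight toward market_share.
weights_tech_led = {"patent_quality": 0.5, "tech_overlap": 0.3, "market_share": 0.2}
candidates = {
    "A": {"patent_quality": 0.9, "tech_overlap": 0.4, "market_share": 0.6},
    "B": {"patent_quality": 0.5, "tech_overlap": 0.8, "market_share": 0.7},
}
ranked = sorted(candidates, key=lambda c: score(candidates[c], weights_tech_led),
                reverse=True)
print(ranked)  # → ['A', 'B']
```

Re-weighting the same indicators for a different strategy can reorder the candidates, which is the point of differentiating weights per M&A situation.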
This paper performs a quantitative analysis of trends in technology mining (TM) approaches using five years (2011–2015) of Global TechMining (GTM) conference proceedings as a data source. These proceedings are processed with the help of Vantage Point software, providing an approach of “tech mining for analyzing tech mining.” Through quantitative data processing (bibliometric analysis, natural language processing, statistical analysis, and principal component analysis (PCA)), this study presents an overview and explores the dynamics and potential of existing and advanced TM methodologies in three layers: related methods, data sources, and software tools. The main groups and combinations of TM and related methods are identified. Key trends and weak signals are detected concerning the use of existing methods (natural language processing (NLP), mapping, network analysis, etc.) and emerging ones (web scraping, ontology modeling, advanced bibliometrics, semantic TRIZ (the theory of inventive problem solving), sentiment analysis, etc.). The results can serve as a guide for researchers, practitioners, and policy makers involved in foresight activities.
Author(s): Nadezhda Mikova
Organization(s): Higher School of Economics
Source: Anticipating Future Innovation Pathways Through Large Data Analysis pp 59-69
Morphology analysis, despite being a strong stimulus for the development of new alternatives, largely relies on domain experts and neglects the relationships between keywords in the construction of morphological structures. In addition, there are few systematic approaches to prioritizing morphological configurations. To address these issues, a hybrid approach is proposed, which enhances the performance of morphology analysis by combining it with subject–action–object (SAO) semantic analysis. Initially, a keyword co-occurrence patent set for subsequent SAO analysis is prepared based on keyword frequency vector analysis. Then, SAO structures are extracted and semantic analysis is performed to identify the relationships between keywords, which helps to build morphological structures more objectively. In addition, a well-defined evaluation system containing eight sub-indexes is proposed to evaluate the morphological configurations. Finally, to demonstrate and validate the proposed approach, dye-sensitized solar cell technology is employed as the case study. Results indicate that the most promising combination predicted by the approach appears frequently in 2012–2014, and its distribution closely matches actual developments in that period. Accordingly, the proposed method can be used to effectively determine the direction of technological change and to forecast technology innovation opportunities.
Author(s): Junfang Guo, Xuefeng Wang, Qianrui Li, Donghua Zhu
Organization(s): Beijing Institute of Technology
Source: Technological Forecasting and Social Change
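SAO extraction, used in the abstract above, normally relies on an NLP dependency parser; as a toy stand-in, a regular expression over a small fixed verb list conveys the idea of pulling subject–action–object triples from sentences. The sentences and verb list below are invented for illustration.

```python
import re

# Naive pattern: a 1-2 word subject, a verb from a fixed list, a 1-2 word object.
# A real system would use a parser instead of this regex heuristic.
SAO = re.compile(r"(\w+(?: \w+)?) (absorbs|transports|converts|improves) (\w+(?: \w+)?)")

def extract_sao(sentences):
    triples = []
    for s in sentences:
        m = SAO.search(s.lower())
        if m:
            triples.append(m.groups())
    return triples

sents = [
    "The dye layer absorbs incident light",
    "The electrolyte transports charge carriers",
]
triples = extract_sao(sents)
print(triples)
```

The extracted triples expose the keyword relationships (what acts on what) that plain keyword frequency counts cannot, which is what makes the morphological structures more objective.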
Technology trend analysis offers a flexible instrument for understanding both opportunity and competition in emerging technologies. The semantic information embedded in Science, Technology & Innovation (ST&I) records makes technology trend analysis more challenging. This paper proposes a semantic-based approach to technology trend analysis that emphasizes Subject-Action-Object (SAO) structures, and applies it to extract technological information and to identify and predict trends in technology development more effectively. An empirical study on graphene demonstrates the proposed trend analysis approach.
Author(s): Chao Yang, Donghua Zhu, Guangquan Zhang
Organization(s): Beijing Institute of Technology, University of Technology Sydney
Source: 2015 10th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)
Medicinal plants have been used in the prevention, diagnosis, and elimination of diseases, based on practical experience accumulated over thousands of years. There is a pressing need to initiate and transform laboratory research into fruitful formulations leading to the development of newer products for the cure of diseases such as AIDS, cancer, and hepatitis, as well as for coping with multi-drug resistance problems. This book presents recent developments in research on medicinal plants for different diseases, the formulation of products, and market strategy.
This chapter discusses the potential of a group of Latin American plants for applications in the cosmetic, perfume, and flavor industries. It is organized in six parts: 1) text mining of the scientific literature for plants, applications, and countries; 2) botanical description of the species; 3) chemical and biological aspects; 4) exploitation and sustainability of natural resources; 5) development of patents; and 6) formulations and applications.
Author(s): Amner Muñoz Acevedo, Erika A. Torres, Ricardo G. Gutierrez, Sandra B. Coles, Martha Cervantes-Diaz, and Geovanna Tafurt-Garcia
Organization(s): Universidad del Norte, Universidad Santo Tomás, Universidad Nacional de Colombia
Source: Therapeutic Medicinal Plants: From Lab to Market, CRC Press, 2016
The delineation of coordinates is fundamental for the cartography of science, and accurate and credible classification of scientific knowledge presents a persistent challenge in this regard. We present a map of Finnish science based on unsupervised-learning classification, and discuss the advantages and disadvantages of this approach vis-à-vis those generated by human reasoning. We conclude that from theoretical and practical perspectives there exist several challenges for human reasoning-based classification frameworks of scientific knowledge, as they typically try to fit new-to-the-world knowledge into historical models of scientific knowledge, and cannot easily be deployed for new large-scale data sets. Automated classification schemes, in contrast, generate classification models only from the available text corpus, thereby identifying credibly novel bodies of knowledge. They also lend themselves to versatile large-scale data analysis, and enable a range of Big Data possibilities. However, we also argue that it is neither possible nor fruitful to declare one or another method a superior approach in terms of realism to classify scientific knowledge, and we believe that the merits of each approach are dependent on the practical objectives of analysis.
Full-text available at http://onlinelibrary.wiley.com/doi/10.1002/asi.23596/full
Author(s): Arho Suominen and Hannes Toivanen
Organization(s): VTT Technical Research Centre of Finland and Lappeenranta University of Technology
Source: Journal of the Association for Information Science and Technology
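An unsupervised classification pass of the kind the abstract above contrasts with human-made schemes can be sketched as assigning abstracts to their most similar seed document by word overlap; a full scheme would learn the clusters iteratively from the corpus rather than from seeds. All texts and labels below are invented.

```python
from collections import Counter

def bow(text):
    """Bag-of-words vector for a text."""
    return Counter(text.lower().split())

def overlap(a, b):
    """Shared word mass between two bag-of-words vectors."""
    return sum(min(a[w], b[w]) for w in a)

def assign(docs, seeds):
    """One assignment pass: each abstract joins the most similar seed document.
    A full unsupervised scheme would iterate and update the centroids."""
    return {d: max(seeds, key=lambda s: overlap(bow(d), bow(seeds[s])))
            for d in docs}

# Hypothetical seed documents standing in for learned cluster centroids
seeds = {"bio": "gene protein cell expression",
         "physics": "quantum photon lattice spin"}
docs = ["protein folding in the cell", "photon emission from quantum dots"]
labels = assign(docs, seeds)
print(labels)
```

Because the groupings emerge from the text corpus itself, a genuinely new body of knowledge simply forms its own cluster instead of being forced into a historical category.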