Wednesday, 31 January 2018

Internet Of Things:

A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low -- or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network.
IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS), microservices and the internet. The convergence has helped tear down the silo walls between operational technology (OT) and information technology (IT), allowing unstructured machine-generated data to be analyzed for insights that will drive improvements.
     
“Today computers -- and, therefore, the internet -- are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes (a petabyte is 1,024 terabytes) of data available on the internet were first captured and created by human beings by typing, pressing a record button, taking a digital picture or scanning a bar code. 
The problem is, people have limited time, attention and accuracy -- all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things -- using data they gathered without any help from us -- we would be able to track and count everything and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling and whether they were fresh or past their best.”
IPv6’s huge increase in address space is an important factor in the development of the Internet of Things. According to Steve Leibson, who identifies himself as “occasional docent at the Computer History Museum,” the address space expansion means that we could “assign an IPv6 address to every atom on the surface of the earth, and still have enough addresses left to do another 100+ earths.” In other words, humans could easily assign an IP address to every "thing" on the planet. An increase in the number of smart nodes, as well as the amount of upstream data the nodes generate, is expected to raise new concerns about data privacy, data sovereignty and security. 
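To get a feel for the scale of that expansion, a quick back-of-the-envelope calculation in Python (a toy illustration only; the world-population figure is a rounded assumption) compares the two address spaces:

# Rough comparison of the IPv4 and IPv6 address spaces.
ipv4_addresses = 2 ** 32            # about 4.3 billion addresses
ipv6_addresses = 2 ** 128           # about 3.4 x 10^38 addresses
world_population = 8 * 10 ** 9      # assumed round figure, for illustration only

print(f"IPv4 total addresses: {ipv4_addresses:,}")
print(f"IPv6 total addresses: {ipv6_addresses:.3e}")
print(f"IPv6 addresses per person: {ipv6_addresses / world_population:.3e}")

Even divided among billions of people, IPv6 leaves each person with an astronomically large pool of addresses to assign to their "things".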
Practical applications of IoT technology can be found in many industries today, including precision agriculture, building management, healthcare, energy and transportation. Electronics engineers and application developers working on products and systems for the Internet of Things can choose from a wide range of connectivity options.
Although the concept wasn't named until 1999, the Internet of Things has been in development for decades. The first internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. The programmers could connect to the machine over the internet, check the status of the machine and determine whether or not there would be a cold drink awaiting them, should they decide to make the trip down to the machine.

Consumer applications

A growing portion of IoT devices is created for consumer use. Examples of consumer applications include the connected car, entertainment, home automation (also known as smart home devices), wearable technology, quantified self, connected health, and appliances such as washer/dryers, robotic vacuums, air purifiers, ovens, or refrigerators/freezers that use Wi-Fi for remote monitoring. Consumer IoT provides new opportunities for user experience and interfaces.
Some consumer applications have been criticized for their lack of redundancy and their inconsistency, leading to a popular parody known as the “Internet of Shit.” Companies have been criticized for their rush into IoT, creating devices of questionable value, and not setting up stringent security standards.

Smart Home

IoT devices are a part of the larger concept of home automation, also known as domotics. Large smart home systems utilize a main hub or controller to provide users with a central control for all of their devices. These devices can include lighting, heating and air conditioning, media and security systems. Ease of use is the most immediate benefit of connecting these functionalities. Long-term benefits can include the ability to create a more environmentally friendly home by automating functions such as ensuring lights and electronics are turned off. One of the major obstacles to obtaining smart home technology is the high initial cost.
Applications
One key application of the smart home is to provide assistance for disabled and elderly individuals. These home systems utilize assistive technology to accommodate an owner's specific disabilities. Voice control can assist users with sight and mobility limitations, while alert systems can be connected directly to cochlear implants worn by hearing-impaired users. They can also be equipped with additional safety features. These features can include sensors that monitor for medical emergencies such as falls or seizures. Smart home technology applied in this way can provide users with more freedom and a higher quality of life.
A second application of the smart home is remote control of connected devices. Someone leaving the office, for example, can tell a connected air conditioner via smartphone to cool the house down to a certain temperature before they arrive.
Another example is using smart assistants such as Amazon's Alexa to catch up on the day's most important news while chopping vegetables for dinner. In general, smart home devices make life at home easier and let us do several things at the same time.
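To make the idea concrete, here is a minimal sketch of the kind of rule a smart home hub might evaluate when everyone leaves the house. The device classes and method names are invented for illustration and do not correspond to any particular vendor's API:

from dataclasses import dataclass

@dataclass
class Light:
    on: bool = True
    def turn_off(self) -> None:
        self.on = False
        print("Light turned off")

@dataclass
class Thermostat:
    target_c: float = 21.0
    def set_target(self, temp_c: float) -> None:
        self.target_c = temp_c
        print(f"Thermostat target set to {temp_c} °C")

def on_everyone_left(lights: list[Light], thermostat: Thermostat) -> None:
    """Rule: when the last person leaves, switch lights off and relax the heating."""
    for light in lights:
        light.turn_off()
    thermostat.set_target(17.0)   # save energy while the house is empty

# Example trigger: the hub detects (e.g. via phone geofencing) that the house is empty.
on_everyone_left([Light(), Light()], Thermostat())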

Enterprise

The term "Enterprise IoT," or EIoT, is used to refer to all devices used in business and corporate settings. By 2019, it is estimated the EIoT will account for nearly 40% or 9.1 billion devices.

Media

Media use of the Internet of things is primarily concerned with marketing and studying consumer habits. Through behavioral targeting these devices collect many actionable points of information about millions of individuals. Using the profiles built during the targeting process, media producers present display advertising in line with the consumer's known habits at a time and location chosen to maximize its effect. Further information is collected by tracking how consumers interact with the content, through conversion tracking, drop-off rate, click-through rate, registration rate and interaction rate. The size of the data often presents challenges as it crosses into the realm of big data. However, in many cases the benefits gained from the stored data greatly outweigh these challenges.

Infrastructure Management

Monitoring and controlling operations of urban and rural infrastructure like bridges, railway tracks, and on- and offshore wind farms is a key application of the IoT. IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk. It can also be used for scheduling repair and maintenance activities in an efficient manner, by coordinating tasks between different service providers and users of these facilities. IoT devices can also be used to control critical infrastructure, for example operating bridges to provide access to ships. Usage of IoT devices for monitoring and operating infrastructure is likely to improve incident management and emergency response coordination, quality of service and up-time, and to reduce the costs of operation in all infrastructure-related areas. Even areas such as waste management can benefit from the automation and optimization that could be brought in by the IoT.

Manufacturing

Network control and management of manufacturing equipment, asset and situation management, or manufacturing process control bring the IoT within the realm of industrial applications and smart manufacturing as well. The IoT intelligent systems enable rapid manufacturing of new products, dynamic response to product demands, and real-time optimization of manufacturing production and supply chain networks, by networking machinery, sensors and control systems together.
Digital control systems to automate process controls, operator tools and service information systems to optimize plant safety and security are within the purview of the IoT. But it also extends itself to asset management via predictive maintenance, statistical evaluation, and measurements to maximize reliability. Smart industrial management systems can also be integrated with the Smart Grid, thereby enabling real-time energy optimization. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by a large number of networked sensors.
The term industrial Internet of things (IIoT) is often encountered in the manufacturing industries, referring to the industrial subset of the IoT. IIoT in manufacturing could generate so much business value that it will eventually lead to the fourth industrial revolution, the so-called Industry 4.0. It is estimated that, in the future, successful companies will be able to increase their revenue through the Internet of things by creating new business models, improving productivity, exploiting analytics for innovation, and transforming the workforce. It is estimated that IIoT could generate $12 trillion of global GDP by 2030.
                  

Agriculture

The IoT contributes significantly towards innovating farming methods. Farming challenges caused by population growth and climate change have made agriculture one of the first industries to utilize the IoT. Integrating wireless sensors with agricultural mobile apps and cloud platforms helps collect vital information about the environmental conditions linked to a farmland – temperature, rainfall, humidity, wind speed, pest infestation, soil humus content or nutrients, among others – which can be used to improve and automate farming techniques, take informed decisions to improve quality and quantity, and minimize risk and waste. App-based field or crop monitoring also lowers the hassle of managing crops at multiple locations. For example, farmers can now detect which areas have been fertilised (or mistakenly missed), tell if the land is too dry, and predict future yields.

Energy management

Integration of sensing and actuation systems, connected to the Internet, is likely to optimize energy consumption as a whole. It is expected that IoT devices will be integrated into all forms of energy-consuming devices (switches, power outlets, bulbs, televisions, etc.) and be able to communicate with the utility supply company in order to effectively balance power generation and energy usage. Such devices would also offer the opportunity for users to remotely control their devices, or centrally manage them via a cloud-based interface, and enable advanced functions like scheduling (e.g., remotely powering on or off heating systems, controlling ovens, changing lighting conditions etc.).
Besides home-based energy management, the IoT is especially relevant to the Smart Grid since it provides systems to gather and act on energy and power-related information in an automated fashion, with the goal of improving the efficiency, reliability, economics, and sustainability of the production and distribution of electricity. Using advanced metering infrastructure (AMI) devices connected to the Internet backbone, electric utilities can not only collect data from end-user connections but also manage other distribution automation devices like transformers and reclosers.
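As a small illustration of how such a device might react to price signals from the grid, the sketch below defers a water heater when the real-time price is high. The price threshold, the sample price feed and the switch_heater() function are all placeholders, not a real utility or device API:

# Hypothetical demand-response sketch for an IoT water heater.
HIGH_PRICE_THRESHOLD = 0.30   # assumed price, in currency units per kWh

def switch_heater(on: bool) -> None:
    print("Heater ON" if on else "Heater OFF (deferring until the price drops)")

def demand_response(current_price: float) -> None:
    # Run the heater only while electricity is cheap.
    switch_heater(current_price <= HIGH_PRICE_THRESHOLD)

for price in [0.12, 0.34, 0.28]:   # sample readings from an assumed AMI price feed
    demand_response(price)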

Environmental monitoring

Environmental monitoring applications of the IoT typically use sensors to assist in environmental protection by monitoring air or water quality, atmospheric or soil conditions, and can even include areas like monitoring the movements of wildlife and their habitats. The development of resource-constrained devices connected to the Internet also means that other applications, such as earthquake or tsunami early-warning systems, can be used by emergency services to provide more effective aid. IoT devices in this application typically span a large geographic area and can also be mobile. It has been argued that the standardization the IoT brings to wireless sensing will revolutionize this area.

Building and home automation

IoT devices can be used to monitor and control the mechanical, electrical and electronic systems used in various types of buildings (e.g., public and private, industrial, institutions, or residential) in home automation and building automation systems. In this context, three main areas are being covered in literature:
  • The integration of the internet with building energy management systems in order to create energy-efficient and IoT-driven “smart buildings”.
  • The possible means of real-time monitoring for reducing energy consumption and monitoring occupant behaviors.
  • The integration of smart devices in the built environment and how they might be used in future applications.

Metropolitan scale deployments

There are several planned or ongoing large-scale deployments of the IoT, to enable better management of cities and systems. For example, Songdo, South Korea, the first of its kind fully equipped and wired smart city, is nearing completion. Nearly everything in this city is planned to be wired, connected and turned into a constant stream of data that would be monitored and analyzed by an array of computers with little or no human intervention.
Another example is a project currently underway in Santander, Spain. For this deployment, two approaches have been adopted. This city of 180,000 inhabitants has already seen 18,000 downloads of its city smartphone app. The app is connected to 10,000 sensors that enable services like parking search, environmental monitoring, digital city agenda, and more. City context information is used in this deployment to benefit merchants through a spark deals mechanism based on city behavior that aims at maximizing the impact of each notification.
Other examples of large-scale deployments underway include the Sino-Singapore Guangzhou Knowledge City; work on improving air and water quality, reducing noise pollution, and increasing transportation efficiency in San Jose, California; and smart traffic management in western Singapore. French company, Sigfox, commenced building an ultra-narrowband wireless data network in the San Francisco Bay Area in 2014, the first business to achieve such a deployment in the U.S. It subsequently announced it would set up a total of 4000 base stations to cover a total of 30 cities in the U.S. by the end of 2016, making it the largest IoT network coverage provider in the country thus far.
Another example of a large deployment is the one completed by New York Waterways in New York City to connect all the city's vessels and be able to monitor them live 24/7. The network was designed and engineered by Fluidmesh Networks, a Chicago-based company developing wireless networks for critical applications. The NYWW network is currently providing coverage on the Hudson River, East River, and Upper New York Bay. With the wireless network in place, NY Waterway is able to take control of its fleet and passengers in a way that was not previously possible. New applications can include security, energy and fleet management, digital signage, public Wi-Fi, paperless ticketing and others.

An Introduction to IPv4 vs IPv6

The transfer of information over the internet is a very complicated process which requires proper mechanisms to ensure that users get quality services in the shortest time possible. One way to enforce this is through the use of internet protocols, which can be defined as accepted standards and regulations determining how information is transferred from one computer to another. Addressing is a major component of these protocols, all the more so because of the large number of internet users all over the world. To enforce this, two popular addressing protocols exist: IPv4 and IPv6. The purpose of this paper is to highlight the history and the features of these protocols together with their associated strengths and weaknesses. The paper also highlights the associated costs involved when implementing the two protocols. The paper concludes by analyzing the future and trends of the two addressing protocols.
IPv4, or internet protocol version 4, traces its origin to the time the internet was developed. It can be attributed to various projects and attempts by the Defense Advanced Research Projects Agency (DARPA) in the better part of the 1970s. The initial protocol was implemented together with TCP (transmission control protocol), though subsequent developments saw the two elements separated. The fact that IPv4 was designed to work in closed settings meant that the developers overlooked issues like security and access mechanisms. However, with the introduction and popularization of the internet, IPv4 started to be used in "open, non trusted, unsecured, external network environments as well…" (Majastor, 2003, pg1). In subsequent years the growth of the internet was tremendous, a condition which raised serious issues regarding the available number of address spaces for each and every existing internet device. This saw initial efforts begin towards the development of a new protocol, IPv6.
IPv6, or internet protocol version 6, started off as an effort by the Internet Engineering Task Force (IETF) in the early 90's to address various limitations presented by IPv4. The main and initial focus was centered on the need to solve the problem of inadequate address space. In 1994 the Internet Engineering Steering Group (IESG) approved IPv6 and the subsequent standards were adopted by the IETF in 1998 (Hagen, 2002, pg 2). IPv6 is often referred to as the Next Generation Internet Protocol, or IPng, although it is already being put into practice in most internet devices today.
The majority of internet users use IPv4, a protocol which has been around for almost thirty years. IPv4 was designed to act as a connectionless mode of delivery, specifically at the network layer. This means that it does not guarantee delivery of data packets in a switched network. IPv4 uses 32-bit, or 4-byte, addresses, which means that the largest possible address space is limited to 2^32 (about 4.3 billion) distinct addresses. Additionally, the size of an IPv4 packet is limited to 64 kilobytes of data. IPv4 addresses are presented in a unique manner which places dots between numeric values, commonly referred to as dot-decimal notation, for instance 192.168.0.3, whereby each octet represents a specific identifier in the entire network. The leading octets identify the network (as determined by the subnet mask) while the remaining octets identify a specific network user, or host. IPv4 offers an optional IPsec security mechanism, although the packet header includes checksums to enhance data integrity. Anyone willing to set up a network which will use IPv4 addressing must manually configure the network or incorporate the use of a DHCP server.
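Python's standard ipaddress module is a handy way to see this dot-decimal structure in action; the address and subnet below are just the illustrative values used above:

import ipaddress

addr = ipaddress.IPv4Address("192.168.0.3")
net = ipaddress.IPv4Network("192.168.0.0/24")    # /24 corresponds to subnet mask 255.255.255.0

print(int(addr))        # the 32-bit integer behind the dotted notation
print(addr.packed)      # the same value as four raw octets (bytes)
print(net.netmask)      # 255.255.255.0
print(addr in net)      # True: host 192.168.0.3 belongs to this subnet
print(ipaddress.IPv4Network("0.0.0.0/0").num_addresses)   # 2**32 total IPv4 addresses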
IPv4 employs both classless and classful addressing mechanisms. Classful addressing uses a set of fixed classes to assign network addresses based on the size of the host space, with some ranges reserved for future or special-purpose use. Classless addressing, on the other hand, is the most common today since it employs variable-length subnet masks. The Internet Assigned Numbers Authority (IANA) is responsible for the allocation and preservation of special-purpose addresses. IPv4 uses traditional broadcast addressing to reach all nodes in a network before initiating a data transfer process. Some of the inadequacies present in IPv4 are addressed in IPv6.
The most distinctive feature differentiating IPv6 from IPv4 is the size of the address space. IPv6 supports addresses that are 128 bits long, four times the length of IPv4's 32-bit addresses. This means that the protocol can support roughly 2^95 addresses for practically every individual on planet earth. Apart from this unique feature there are other changes that make IPv6 far superior to IPv4. "The IPV6 package includes important features such as higher scalability, better data integrity, QoS features, auto configuration mechanisms that make it manageable even for higher numbers of dynamically connecting devices, improved routing aggregation in the backbone, and improved multicast routing" (Hagen, 2002, pg 4). An IPv6 address is composed of 16 octets (128 bits), which accounts for the enormous number of available addresses. These addresses are represented as groups of hexadecimal digits separated by colons, unlike the dots used in IPv4. The leading portion represents the network prefix while the trailing portion represents the host, or interface identifier.
IPv6 address configuration can be stateless, whereby the internet user, or host, acquires an address automatically. Additionally, IPv6 incorporates multicasting technology in packet transmission, which eases the transmission of multimedia information. IPv6 supports jumbograms, which are packets of up to 2^32 − 1 octets, compared to IPv4's limit of 2^16 − 1 octets, which can significantly improve the performance of the network. IPv6 is more enhanced, though it inherits and preserves some features from IPv4. The header is composed of a fixed-length portion, but the existence of extension headers provides more options for transmitting large packets of data and enhances security and routing options (Hagen, 2002, pg 16).
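The same ipaddress module also illustrates the colon-separated IPv6 notation and the sheer size of the address space (2001:db8::/32 is a prefix reserved for documentation examples):

import ipaddress

addr = ipaddress.IPv6Address("2001:db8::1")
net = ipaddress.IPv6Network("2001:db8::/64")

print(addr.exploded)       # full form: 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.packed.hex())   # 16 octets (128 bits) of raw address data
print(net.num_addresses)   # 2**64 addresses in a single /64 subnet
print(ipaddress.IPv6Network("::/0").num_addresses == 2 ** 128)   # True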
IPv6 is supposed to replace IPv4 in a few years' time, especially because the latter faces possible address exhaustion. Additionally, IPv6 has superior advantages which give it an edge over IPv4. IPv6 offers extended and improved addressing mechanisms: it is possible to have a far larger number of addresses, which are 96 bits longer than IPv4's. Furthermore, the auto-configuration capabilities make it easy for network users and administrators to manage networks. IPv6 provides more security options, specifically through mandatory IPsec support (Miyamoto, 2008). Quality of service is another strong advantage of IPv6, implemented by labeling specific traffic that is supposed to meet certain traffic configuration conditions.
Despite the marvelous features that IPv6 possesses, there are some drawbacks associated with it. These are to a large extent associated with the size of the packet header. "Its larger headers require more space in buffers and tables. The extension approach to headers can be an issue in hardware implementations because, except for the first header, information isn't located at a fixed offset from the start of the packet" (Wong, 2002). The whole process might take a lot of time before the receiver can identify the payload contents. Despite this drawback, IPv6 is expected to be a massive success, though there will be a lot of challenges when it comes to implementation.
A major issue that raises a lot of concern is the strategy to be adopted to move from IPv4 to IPv6. This is where a lot of the costs lie, both financially and in terms of network performance. Issues like address resolution present the biggest challenge. There has to be some defined way of ensuring that users migrate from IPv4 to IPv6 without any effect on network performance or quality of service and, most importantly, without loss of important user data. The problem of legacy equipment is also expected to raise serious concerns: there have to be manufacturers of network equipment that will sufficiently support IPv6. Network Address Translation (NAT) is one of the approaches used to counteract the above challenges.
IPv4 is an important protocol that has been in use for quite some time, but because the available addresses are running out, it is paramount that internet users adopt IPv6. IPv6 is designed to solve most of the problems and limitations of IPv4. The challenging aspect concerns adoption techniques and how users will respond to the anticipated changes. The fact that the protocol is already being used by some internet users indicates the readiness and appropriateness of the technology.

Tuesday, 30 January 2018

Big data

In information technology, the term Big Data refers to a very large set of stored data. Big Data is said to rest on five V's: velocity, volume, variety, veracity and value.[1][2][3][4]
Big Data is a term widely used today to name data sets so large or complex that traditional data processing applications still cannot handle them. Challenges in this area include analysis, capture, data curation, search, sharing, storage, transfer, visualization and data privacy. The term often refers to the use of predictive analytics and other advanced methods to extract value from data, and rarely to a particular size of data set. Greater accuracy in the data can lead to more confident decision-making, and better decisions can mean greater operational efficiency, reduced risk and lower costs.
Proper analysis of such large data sets makes it possible to find new correlations, such as "spotting business trends on the ground, preventing diseases, combating crime and so on". Scientists, business executives, media and advertising professionals and governments regularly face difficulties in areas with large data sets, including Internet search, finance and business informatics. Scientists, for example, encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, and biological and environmental research.
Such data sets grow in size in part because they are gathered ever more frequently and from more sources, since data can now be collected by cheap information-sensing devices such as mobile sensing equipment, aerial (remote sensing) platforms, software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. The world's per-capita technological capacity to store information has roughly doubled every 40 months since the 1980s. As of 2012, 2.5 exabytes (2.5 × 10^18 bytes) of data were created every day. The current challenge for large companies is to determine who should own big data initiatives that span the entire organization.
Relational database management systems and desktop statistics and visualization packages often have difficulty handling large volumes of data, which instead require "parallel software running on tens, hundreds, or even thousands of servers". What counts as "Big Data" varies with the capabilities of the users and their tools, so what is considered "big" in one year will likely become usual in later years. For some organizations, having access to hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options; today, "the volume of data stored or accessed becomes an important consideration."
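As a toy illustration of that "parallel software" idea, the sketch below splits a word count across several worker processes with Python's standard library. It is not a real Big Data framework such as Hadoop or Spark, but it shows the basic map-and-reduce pattern those systems scale out to thousands of servers:

from collections import Counter
from multiprocessing import Pool

def count_words(chunk: list[str]) -> Counter:
    """Count word occurrences in one chunk of the data set (the 'map' step)."""
    return Counter(word for line in chunk for word in line.split())

if __name__ == "__main__":
    # Toy data set; a real job would read from distributed storage.
    lines = ["big data is big", "data drives decisions", "big decisions need data"] * 1000
    chunks = [lines[i::4] for i in range(4)]          # split the work four ways

    with Pool(processes=4) as pool:
        partial_counts = pool.map(count_words, chunks)

    total = sum(partial_counts, Counter())            # the 'reduce' step
    print(total.most_common(3))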

   

Definition of Big Data:

To sum it up as briefly as possible, Big Data is the enormous amount of information held in database servers (Microsoft SQL Server or Oracle MySQL, for example) running on many networked computer servers (Intel, HP, IBM, Dell, Cisco, Samsung, etc.) under a network operating system (Microsoft Windows Server 2008 or Red Hat Linux, for example), interconnected with each other, and nowadays running inside a cloud computing platform (Microsoft Windows Azure, for example). That information is accessed over the internet by people using an ordinary computer (a notebook, for example) or a mobile phone (smartphone), either to read it or to add more information to the database via cloud computing.
With each passing year, this Big Data tends to grow ever larger. To guarantee information security and protect the privacy of the information, several modern frameworks exist today (ITIL and COBIT, for example), in addition to the protection that has existed since the 1980s: antivirus software (Symantec Norton, McAfee, AVG, Avast, etc.). Nowadays SQL databases live inside the servers of internet providers, which make the service available to their customers.
Two examples of Big Data: YouTube, where all the videos available online are stored in many SQL database servers, and Wikipedia, where all the texts available online are stored in many SQL database servers.
   

The job market:

Job opportunities in statistics are increasing thanks to the proliferation of data analysis software and its use, especially in decision-making for strategic purposes such as government policy, investment selection, business and company management, and so on. Big Data makes it possible to work with large volumes of data that are sometimes not accepted by the major statistical packages. In Brazil, the profession of Statistician is regulated by Federal Decree No. 62497 of 1968[7]. This professional is trained to work with data structures, handling them to extract strategic information, applying statistical methods of analysis, and programming for statistical analysis, so as to reach conclusions with controlled margins of error for decision-making based on the available data. IBM created the Big Data University, which provides a measure of Big Data knowledge. There are also websites offering distance-learning platforms, commonly known as MOOCs, with courses in Big Data and Data Science, whose content can be studied for free or with payment for the course certificate. The best known are Coursera and EDX.org, the latter the result of a partnership between the American universities Harvard and MIT. In Brazil, the market for this area is promising, and many renowned universities have begun to offer graduate courses and MBAs related to Big Data, varying mostly in the amount of class time devoted to the business component, an important part of the training of this professional, who will need, in addition to technical skills, the ability to present the conclusions of their analyses and insights to a lay audience in a simple way, so as to generate value for the company's business.

 


 

Monday, 29 January 2018

Software-Defined Networking

Software-Defined Networking (SDN) helps organizations accelerate application deployment and delivery, dramatically reducing IT costs through policy-enabled work-flow automation. SDN technology enables cloud architectures by providing automated, on-demand application delivery and mobility at scale. SDN enhances the benefits of data center virtualization, increasing resource flexibility and utilization and reducing infrastructure costs and overhead.
SDN accomplishes these business objectives by converging the management of network and application services into centralized, extensible orchestration platforms that can automate the provisioning and configuration of the entire infrastructure. Common, centralized IT policies bring together disparate IT groups and work flows. The result is a modern infrastructure that can deliver new applications and services in minutes, rather than the days or weeks required in the past.
SDN delivers speed and agility when deploying new applications and business services. Flexibility, policy, and programmability are the hallmarks of Cisco's SDN solutions, with a platform capable of handling the most demanding networking needs of today and tomorrow.

Concept

 

Software-defined networking (SDN) is an architecture purporting to be dynamic, manageable, cost-effective, and adaptable, seeking to be suitable for the high-bandwidth, dynamic nature of today's applications. SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services.
The OpenFlow protocol can be used in SDN technologies. The SDN architecture is:
  • Directly programmable: Network control is directly programmable because it is decoupled from forwarding functions.
  • Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
  • Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
  • Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.

                       
Originally, SDN focused solely on the separation of the network's control plane, which makes decisions about how packets should flow through the network, from the network's data plane, which actually moves packets from place to place. When a packet arrives at a switch in the network, rules built into the switch's proprietary firmware tell the switch where to forward the packet. The switch sends every packet going to the same destination along the same path, and treats all the packets the exact same way. In a classic SDN scenario, rules for packet handling are sent to the switch from a controller, an application running on a server somewhere, and switches (also known as data plane devices) query the controller for guidance as needed and provide it with information about traffic they are handling. Controllers and switches communicate through the controller's southbound interface, usually OpenFlow, although other protocols exist.
Where a traditional network would use a specialized appliance such as a firewall or link-load balancer, an SDN deploys an application that uses the controller to manage data plane behavior. Applications talk to the controller through its northbound interface. As of the end of 2014, there is no formal standard for the application interface of the controller to match OpenFlow as a general southbound interface. It is likely that the OpenDaylight controller's northbound application program interface (API) may emerge as a de facto standard over time, given its broad vendor support.
Software-defined networking uses an operation mode that is sometimes called adaptive or dynamic, in which a switch issues a route request to a controller for a packet that does not have a specific route. This process is separate from adaptive routing, which issues route requests through routers and algorithms based on the network topology, not through a controller.
With SDN, the administrator can change any network switch's rules when necessary -- prioritizing, de-prioritizing or even blocking specific types of packets with a very granular level of control. This is especially helpful in a cloud computing multi-tenant architecture, because it allows the administrator to manage traffic loads in a flexible and more efficient manner. Essentially, this allows the administrator to use less expensive commodity switches and have more control over network traffic flow than ever before.
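As a concrete sketch of that controller-centric model, the snippet below pushes a rule that drops Telnet traffic through a controller's northbound REST interface. The controller URL and the JSON schema are placeholders invented for illustration; a real controller such as OpenDaylight defines its own endpoints and data model:

import json
import urllib.request

CONTROLLER_URL = "http://sdn-controller.example.com:8181/flows"   # placeholder address

# Hypothetical flow rule: drop TCP traffic destined for port 23 (Telnet) on switch 1.
flow_rule = {
    "switch_id": "openflow:1",
    "priority": 100,
    "match": {"ip_proto": "tcp", "tcp_dst": 23},
    "action": "DROP",
}

request = urllib.request.Request(
    CONTROLLER_URL,
    data=json.dumps(flow_rule).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:   # would only succeed against a live controller
    print(response.status)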

Bring your own device (BYOD)

Bring your own device (BYOD)—also called bring your own technology (BYOT), bring your own phone (BYOP), and bring your own personal computer (BYOPC)—refers to the policy of permitting employees to bring personally owned devices (laptops, tablets, and smart phones) to their workplace, and to use those devices to access privileged company information and applications. The phenomenon is commonly referred to as IT consumerization.
BYOD is making significant inroads in the business world, with about 75% of employees in high growth markets such as Brazil and Russia and 44% in developed markets already using their own technology at work. Surveys have indicated that businesses are unable to stop employees from bringing personal devices into the workplace. Research is divided on benefits. One survey shows around 95% of employees stating they use at least one personal device for work.
    
   

New trends 

The proliferation of devices such as tablets and smartphones, which are now used by many people in their daily lives, has led a number of companies, such as IBM, to allow employees to bring their own devices to work, due to perceived productivity gains and cost savings. The idea was initially rejected because of security concerns, but more and more companies are now looking to incorporate BYOD policies, with 95% of respondents to a BYOD survey by Cisco saying they either already supported BYOD or were at least considering supporting it. A recent study by Enterprise CIO reports that incorporating a BYOD culture can boost productivity by 16% over a 40-hour week.


Definition:

BYOD, or Bring Your Own Device, refers to company policies drawn up to enable employees to bring their personal mobile devices – including smartphones, laptops and tablets – to their place of work and use them to access data and information exclusive to the company they work for. These policies can be drawn up by any establishment, irrespective of its field or industry.
BYOD is now emerging as the future of the enterprise, as most employees already make use of their personally owned gadgets and technology while at the office. In fact, some companies believe that this trend may actually make employees more productive, as they work best with the mobile devices they are most comfortable with. Enabling BYOD also helps employees perceive their employer as more progressive and worker-friendly.

  

Pros of BYOD

  • Companies adopting BYOD policies can save the considerable money otherwise spent on purchasing high-end devices for their employees’ use. They can also rest assured that employees will take better care of these gadgets, as they are their own personal devices.
  • Employees are more comfortable handling their own gadgets than using unfamiliar technology provided by their company. This also makes them feel more in control in the office environment.


IT support and management.

Authorizing employee-owned devices in a corporate setting can be a challenge for the IT administrator, as it involves deciding on role-based device commissioning and the level of permissible access to corporate resources. Mobile Device Manager Plus regulates employee access to corporate data, providing simple ways to enroll devices based on ownership. The application also includes:
  • Separate group policies for BYOD and corporate-owned devices.
  • Support for iOS, Android, and Windows platforms.
  • Active Directory for device authentication.
  • Policy settings, such as managing Wi-Fi, corporate email accounts, and media options.
             

Containerization is key.

Separation of corporate and personal information on each device is the best possible way to manage and secure corporate data in a BYOD environment. Containerization is also ideal for device owners, as it keeps their personal data intact. Some of the key attributes of containerization, which is supported in Mobile Device Manager Plus, are as follows:
  • Enterprise data is stored in an encrypted container.
  • User experience and privacy are not compromised.
  • Regulated corporate data wipes for employees who leave the organization.
  • Containerization of official mobile apps to block outside apps from accessing corporate data.
 

Sunday, 28 January 2018

Software as a Service (SaaS)

Software as a service (SaaS) is a software distribution model in which a third-party provider hosts applications and makes them available to customers over the Internet. SaaS is one of three main categories of cloud computing, alongside infrastructure as a service (IaaS) and platform as a service (PaaS). 
  

Introduction

Software as a Service is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network. It is a software delivery method that provides access to software and its functions remotely, as a web-based service. SaaS allows customers and organizations to access business functionality at a cost that is usually lower than paying for licensed applications, since SaaS pricing is typically based on a monthly fee. In addition, because the software is hosted remotely, customers don't need to buy additional hardware. SaaS also removes the need for organizations to handle installation, set-up and often the daily upkeep and maintenance. SaaS is becoming an increasingly prevalent delivery model as the underlying technologies that support web hosting and service-oriented architecture mature and new development approaches such as Ajax become popular.
SaaS is closely related to the ASP and on-demand computing software delivery models. In the software-on-demand model, the provider gives the client or organization network-based access to a single copy of an application created specifically for SaaS distribution.

CONCEPT

The concept of SaaS took hold at a time when information technology executives were fed up with the ballooning costs of packaged enterprise software. Companies had to spend thousands of dollars just to buy one software license, and then spend even more to implement the software, including consulting fees, training costs, the extra infrastructure required to run the software, and maintenance fees. As a result, SaaS emerged from the wreckage of botched multimillion-dollar CRM and ERP implementations as a radical alternative to the software licensing model: a speedier, cheaper and easier implementation.
Enterprise Software Applications delivered as SaaS include business applications such as customer relationship management (CRM), web conferencing and collaboration applications, HR applications like talent management and payroll, enterprise resource management applications like ERP, supply chain management (SCM), product lifecycle management (PLM) and so on.

The key characteristics of SaaS are:

  • The software is rented for use; the customer does not own it.
  • The software is installed on a central server rather than on the client machines. The customer accesses the application through the internet, and the vendor is responsible for the proper maintenance and performance of the software.
  • The vendor provides maintenance, support and upgrades to the software from the server.

Situation

Analysis of the demand for Software as a Service: here we discuss whether sufficient demand for the product exists in the market, and what the advantages and disadvantages of using it are.


SaaS removes the need for organizations to install and run applications on their own computers or in their own data centers. This eliminates the expense of hardware acquisition, provisioning and maintenance, as well as software licensing, installation and support. Other benefits of the SaaS model include:
Flexible payments: Rather than purchasing software to install, or additional hardware to support it, customers subscribe to a SaaS offering. Generally, they pay for this service on a monthly basis using a pay-as-you-go model. Transitioning costs to a recurring operating expense allows many businesses to exercise better and more predictable budgeting. Users can also terminate SaaS offerings at any time to stop those recurring costs.
Scalable usage: Cloud services like SaaS offer high scalability, which gives customers the option to access more, or fewer, services or features on-demand.

Automatic updates: Rather than purchasing new software, customers can rely on a SaaS provider to automatically perform updates and patch management. This further reduces the burden on in-house IT staff.
Accessibility and persistence: Since SaaS applications are delivered over the Internet, users can access them from any Internet-enabled device and location.
But SaaS also poses some potential disadvantages. Businesses must rely on outside vendors to provide the software, keep that software up and running, track and report accurate billing and facilitate a secure environment for the business' data. Providers that experience service disruptions, impose unwanted changes to service offerings, experience a security breach or any other issue can have a profound effect on the customers' ability to use those SaaS offerings. As a result, users should understand their SaaS provider's service-level agreement, and make sure it is enforced.
SaaS is closely related to the ASP (application service provider) and on-demand computing software delivery models. The hosted application management model of SaaS is similar to ASP: the provider hosts the customer’s software and delivers it to approved end users over the internet. In the software-on-demand SaaS model, the provider gives customers network-based access to a single copy of an application that the provider created specifically for SaaS distribution. The application’s source code is the same for all customers, and when new features or functionalities are rolled out, they are rolled out to all customers. Depending upon the service-level agreement (SLA), the customer’s data for each model may be stored locally, in the cloud, or both locally and in the cloud.
   
      

Conclusion

Software as a service is a very different model from the traditional software license, maintenance and client-server model. SaaS will be the way most applications are delivered in the near future. We can say that technology innovation is the primary driver of SaaS adoption. It is an attractive delivery model for high-volume, commoditized business processes such as back-office banking, and SaaS does not have to be an all-or-nothing value proposition: it can operate in a hybrid model. Adoption of SaaS varies by region: while Europe, the Middle East and Africa respondents cited total cost of ownership as the main motivator, North America and Asia/Pacific participants focused on ease and speed of development. However, SaaS is not guaranteed to be less expensive than on-premises software, and it also carries some risks. For some organizations it is very difficult to relinquish control or trust third parties to manage their applications and data.
A series of macro-trends is fundamentally changing the way businesses must operate. Globalization is changing the competitive landscape, and mobility is changing the way workers do their jobs. An explosion of consumer-oriented, on-demand services, led by Amazon.com and Apple's iTunes has taught people how easy it can be to access and share information or the goods and services they want.
These experiences, combined with the escalating competitive climate and challenges of managing an increasingly dispersed workforce, are forcing businesses of all sizes to re-think how they acquire and utilize software applications. Unwilling to continue to tolerate the operating inefficiencies and ongoing costs of traditional on-premise software products, a growing number of businesses are now adopting on-demand solutions to meet their business needs.
This has opened the door to an exciting new era of opportunity for organizations to leverage and build their own on-demand applications and a Pandora's Box of challenges for organizations trying to develop and deliver SaaS solutions in a cost effective fashion. In addition to designing a unique on-demand solution, they must build and deliver it in a scalable and secure fashion. For ISVs, this must be done without the benefit of the upfront revenues of a traditional, perpetual license model. Instead, the pay-as-you-go subscription pricing approach inherent in SaaS places significant financial constraints on aspiring SaaS vendors. Yet, they must still get to market quickly and scale their operations in order to keep pace with escalating competition and customer demands in the SaaS market.

Jaydev Unadkat becomes costliest Indian player at IPL auction day 2, Twitterati respond with memes and humour

After Jaydev Unadkat was sold for Rs 11.5 crore to Rajasthan Royals, Twitter users got down to wondering what the reactions of other bowlers might be, and to hoping he does not "have a heart attack wherever he is", after he became the costliest Indian player at the IPL auction. 

 

As the IPL auction frenzy continues, Jaydev Unadkat became the costliest Indian player after he was bought by Rajasthan Royals for Rs 11.5 crore. The bid has left many confused and unsettled, especially if the reactions of users on Twitter and Facebook are anything to go by. Unadkat became the costliest Indian player on the second day of the IPL 2018 auction in Bengaluru. Twitter users were quick to respond to the news with memes and funny comebacks. Unadkat is, however, an experienced IPL player and took a total of 24 wickets in 12 matches in 2017. He also recently played against Sri Lanka in the T20I series, taking four wickets in three matches. His performance earned him the Man of the Series award, following which he was also part of the T20I series against South Africa overseas.
From wondering what the reactions of other bowlers might be to hoping he does not “have a heart attack wherever he is”, this is how people on the micro-blogging site reacted to Unadkat becoming the costliest Indian player to be sold.
     
Gujarat Chief Minister Vijay Rupani today heaped praise on Jaydev Unadkat, saying the left-arm pacer was the pride of Saurashtra. Unadkat was the costliest Indian buy, fetching a mind-boggling INR 11.5 crore deal from the usually thrifty Rajasthan Royals on the final day of the IPL auction in Bengaluru. Rupani congratulated Unadkat for being picked up by the Royals.
“Players like Jaydev (Unadkat) are pride of Saurashtra. I congratulate him and wish him to do well in the Indian Premier League and for the country,” Rupani told reporters. The 26-year-old Unadkat has played a Test, seven ODIs and four T20Is for India, but in the last year he has emerged as a parsimonious bowler in the shortest format.

Types of cloud services: IaaS, PaaS, SaaS

Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). These are sometimes called the cloud computing stack, because they build on top of one another. Knowing what they are and how they are different makes it easier to accomplish your business goals.

Infrastructure-as-a-service (IaaS)

The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a cloud provider on a pay-as-you-go basis. To learn more, see What is IaaS?

Platform as a service (PaaS)

Platform-as-a-service (PaaS) refers to cloud computing services that supply an on-demand environment for developing, testing, delivering and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network and databases needed for development. To learn more, see What is PaaS?

Software as a service (SaaS)

Software-as-a-service (SaaS) is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet or PC. To learn more, see What is SaaS?
   
Definition

Infrastructure as a Service (IaaS)


Infrastructure as a service (IaaS) is a form of cloud computing that provides virtualized computing resources over the internet. IaaS is one of the three main categories of cloud computing services, alongside software as a service (SaaS) and platform as a service (PaaS).

      

IaaS architecture and how it works

In an IaaS model, a cloud provider hosts the infrastructure components traditionally present in an on-premises data center, including servers, storage and networking hardware, as well as the virtualization or hypervisor layer.
The IaaS provider also supplies a range of services to accompany those infrastructure components. These can include detailed billing, monitoring, log access, security, load balancing and clustering, as well as storage resiliency, such as backup, replication and recovery. These services are increasingly policy-driven, enabling IaaS users to implement greater levels of automation and orchestration for important infrastructure tasks. For example, a user can implement policies to drive load balancing to maintain application availability and performance.
IaaS customers access resources and services through a wide area network (WAN), such as the internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs); install operating systems in each VM; deploy middleware, such as databases; create storage buckets for workloads and backups; and install the enterprise workload into that VM. Customers can then use the provider's services to track costs, monitor performance, balance network traffic, troubleshoot application issues, manage disaster recovery and more.
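For example, here is a minimal sketch of launching a single VM with the AWS SDK for Python (boto3). The region, AMI ID and key pair name are placeholders, and valid AWS credentials are assumed to be configured in the environment:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder machine image ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",       # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched VM {instance_id}")

From there, the same kind of scripted calls can install middleware, attach storage and wire up monitoring, which is exactly the automation the IaaS model is meant to enable.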
Any cloud computing model requires the participation of a provider. The provider is often a third-party organization that specializes in selling IaaS. Amazon Web Services (AWS) and Google Cloud Platform (GCP) are examples of independent IaaS providers. A business might also opt to deploy a private cloud, becoming its own provider of infrastructure services.
     
    

Platform as a Service (PaaS)

Platform as a service (PaaS) is a cloud computing model in which a third-party provider delivers hardware and software tools -- usually those needed for application development -- to users over the internet. A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS frees users from having to install in-house hardware and software to develop or run a new application.
        

PaaS architecture and how it works

PaaS does not typically replace a business's entire IT infrastructure. Instead, a business relies on PaaS providers for key services, such as application hosting or Java development.
A PaaS provider builds and supplies a resilient and optimized environment on which users can install applications and data sets. Users can focus on creating and running applications rather than constructing and maintaining the underlying infrastructure and services.
Many PaaS products are geared toward software development. These platforms offer compute and storage infrastructure, as well as text editing, version management, compiling and testing services that help developers create new software more quickly and efficiently. A PaaS product can also enable development teams to collaborate and work together, regardless of their physical location.
SaaS itself has already been covered earlier in this blog.

What is cloud computing? A beginner's guide

Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.
Still foggy on how cloud computing works and what it is for? This beginner’s guide is designed to demystify basic cloud computing jargon and concepts and quickly bring you up to speed.


Uses of cloud computing

You are probably using cloud computing right now, even if you don’t realise it. If you use an online service to send email, edit documents, watch movies or TV, listen to music, play games or store pictures and other files, it is likely that cloud computing is making it all possible behind the scenes. The first cloud computing services are barely a decade old, but already a variety of organisations—from tiny startups to global corporations, government agencies to non-profits—are embracing the technology for all sorts of reasons. Here are a few of the things you can do with the cloud:
  • Create new apps and services
  • Store, back up and recover data
  • Host websites and blogs
  • Stream audio and video
  • Deliver software on demand
  • Analyse data for patterns and make predictions


Common Cloud Examples


The lines between local computing and cloud computing sometimes get very, very blurry. That's because the cloud is part of almost everything on our computers these days. You can easily have a local piece of software (for instance, Microsoft Office 365) that utilizes a form of cloud computing for storage (Microsoft OneDrive).
That said, Microsoft also offers a set of Web-based apps, Office Online, that are Internet-only versions of Word, Excel, PowerPoint, and OneNote accessed via your Web browser without installing anything. That makes them a version of cloud computing (Web-based=cloud).
Google Drive: This is a pure cloud computing service, with all the storage found online so it can work with the cloud apps: Google Docs, Google Sheets, and Google Slides. Drive is also available on more than just desktop computers; you can use it on tablets like the iPad or on smartphones, and there are separate apps for Docs and Sheets, as well. In fact, most of Google's services could be considered cloud computing: Gmail, Google Calendar, Google Maps, and so on.
Apple iCloud: Apple's cloud service is primarily used for online storage, backup, and synchronization of your mail, contacts, calendar, and more. All the data you need is available to you on your iOS, Mac OS, or Windows device (Windows users have to install the iCloud control panel). Naturally, Apple won't be outdone by rivals: it offers cloud-based versions of its word processor (Pages), spreadsheet (Numbers), and presentations (Keynote) for use by any iCloud subscriber. iCloud is also the place iPhone users go to utilize the Find My iPhone feature that's all important when the handset goes missing.
Amazon Cloud Drive: Storage at the big retailer is mainly for music, preferably MP3s that you purchase from Amazon, and images—if you have Amazon Prime, you get unlimited image storage. Amazon Cloud Drive also holds anything you buy for the Kindle. It's essentially storage for anything digital you'd buy from Amazon, baked into all its products and services.
Hybrid services like Box, Dropbox, and SugarSync all say they work in the cloud because they store a synced version of your files online, but they also sync those files with local storage. Synchronization is a cornerstone of the cloud computing experience, even if you do access the file locally.
Likewise, it's considered cloud computing if you have a community of people with separate devices that need the same data synced, be it for work collaboration projects or just to keep the family in sync. For more, check out The Best Cloud Storage and File-Syncing Services for 2016.

Cloud Hardware

Right now, the primary example of a device that is completely cloud-centric is the Chromebook. These are laptops that have just enough local storage and power to run the Chrome OS, which essentially turns the Google Chrome Web browser into an operating system. With a Chromebook, almost everything you do is online: apps, media, and storage are all in the cloud.

Or you can try a Chromebit, a smaller-than-a-candy-bar stick that turns any display with an HDMI port into a usable computer running Chrome OS.
Of course, you may be wondering what happens if you're somewhere without a connection and you need to access your data. This is currently one of the biggest complaints about Chrome OS, although its offline functionality (that is, non-cloud abilities) is expanding.

Saturday, 27 January 2018

Electronic skin

Electronic skin or e-skin is a thin electronic material that mimics human skin in one or more ways. Specifically, human skin can sense pressure and temperature, stretch, and can heal itself. Electronic skin aims to apply these functions to robotic and health applications.
In February 2011, a Stanford team developed a stretchable solar cell that could be used to power their electronic skin. An accordion-like micro-structure allowed the cells to stretch up to 30% without damage.

Flexible array sensors:
Using organic transistors with a floating gate embedded in hybrid dielectrics that comprise a 2-nanometer-thick molecular self-assembled monolayer and a 4-nanometer-thick plasma-grown metal oxide, a nonvolatile memory array is prepared on flexible plastic substrates for use in electronic skin. The small thickness of the dielectrics allows a nonvolatile, reversible threshold-voltage shift. By integrating a flexible array of organic floating-gate transistors with a pressure-sensitive rubber sheet, a sensor matrix is obtained that identifies the distribution of applied mechanical pressure and stores the analog sensor input as a two-dimensional image over long periods of time.
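As a purely illustrative software-side sketch of that readout idea (read_pressure_cell below is a hypothetical stand-in for the actual analog electronics, not part of any published design), scanning such a row/column-addressed matrix into a two-dimensional pressure image might look like this:

# Illustrative sketch only: scanning a row/column-addressed pressure sensor
# matrix into a 2D "pressure image". read_pressure_cell is a hypothetical
# placeholder for the real analog readout of one floating-gate transistor cell.
import random

ROWS, COLS = 8, 8

def read_pressure_cell(row, col):
    # Placeholder: a real implementation would select the row/column lines
    # and digitize the transistor's output; here we just simulate a value.
    return round(random.random(), 2)

pressure_image = [[read_pressure_cell(r, c) for c in range(COLS)]
                  for r in range(ROWS)]

for row in pressure_image:
    print(row)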
Robotic sensors implementing the e-skin technology:
Robots could soon sense heat and pressure through a flexible e-skin incorporating a matrix of semiconducting or tactile sensors, as shown in figure 7. A flexible electronic skin that can sense when something is too hot to handle or is being squeezed too hard could give robots an almost-human sense of touch. Robots have mastered picking and placing, welding, and similar tasks that can be precalibrated, but they struggle with tasks that require a sense of touch. Electronic skin that can sense touch is therefore a major step towards next-generation robotics, giving robots a human-like sense of contact with their surroundings. Robotics has made tremendous strides in replicating the senses of sight and sound, but smell and taste still lag behind, and touch was long thought to be impossible.

Introduction

Electronics plays a very important role in developing devices for almost every purpose, and electronic equipment is required in every field. One of the best current and future examples of integrated electronics in the medical field is artificial skin: an ultrathin electronic device that attaches to the skin like a stick-on tattoo and can measure the electrical activity of the heart, brain waves and other vital signals. Evolution in robotics is also demanding increased perception of the environment. Human skin provides sensory perception of temperature, touch/pressure, and air flow.
The goal is to develop sensors on flexible substrates that are compliant to curved surfaces. Researchers aim to use artificial skin to bring revolutionary change to robotics, to the medical field and to flexible electronics. Skin is the largest organ in the human body, so artificial skin is designed to replace it according to need. The main objective of artificial skin is to sense heat, pressure, touch, airflow and whatever else human skin senses. It can also serve as a skin replacement for prosthetic limbs and robotic arms. In the biomedical sense, artificial skin is skin grown in a laboratory.
Artificial skin goes by various names: in the biomedical field it is called artificial skin, in electronics it is called electronic skin, some scientists call it sensitive skin, and it is also referred to as synthetic skin or fake skin. Whatever the name, the application is the same: skin replacement for people who have suffered skin trauma, such as severe burns or skin diseases, as well as robotic applications and so on. An artificial skin has also recently been demonstrated at the University of Cincinnati for in-vitro sweat simulation and testing, capable of skin-like texture, wetting, sweat pore density and sweat rates.

Architecture of e-skin

With the interactive e-skin, researchers demonstrated an elegant system on plastic that can be wrapped around different objects to enable a new form of human-machine interface (HMI). Other companies, including Massachusetts-based engineering firm MC10, have created flexible electronic circuits that are attached to a wearer's skin using a rubber stamp. MC10 originally designed the tattoos, called Biostamps, to help medical teams measure the health of their patients either remotely or without the need for large, expensive machinery. Fig 2 shows the various parts that make up the MC10 electronic tattoo called the Biostamp. It can be stuck to the body using a rubber stamp and protected using spray-on bandages. The circuit can be worn for two weeks, and Motorola believes this makes it perfect for authentication purposes.

Biostamps use high-performance silicon, can stretch up to 200 per cent and can monitor temperature, hydration and strain, among other medical statistics. Javey's study claims that while building sensors into networks isn't new, the interactive display, being able to recognize touch and pressure and have the flexible circuit respond to it, is the 'breakthrough'. His team is now working on a sample that could also register and respond to changes in temperature and light to make the skin even more lifelike.

Large-area ultrasonic sensor arrays could keep both robots and humans out of trouble. An ultrasonic skin covering an entire robot body could work as a 360-degree proximity sensor, measuring the distance between the robot and external obstacles. This could prevent the robot from crashing into walls or allow it to handle our soft, fragile human bodies with more care. For humans, it could provide prosthetics or garments that are hyperaware of their surroundings (see the short time-of-flight sketch at the end of this section). Besides adding multiple functions to e-skins, it’s also important to improve their electronic properties, such as the speed at which signals can be read from the sensors. For that, electron mobility is a fundamental limiting factor, so some researchers are seeking to create flexible materials that allow electrons to move very quickly.
Ali Javey and his colleagues at the University of California, Berkeley, have had some success in that area. They figured out how to make flexible, large-area electronics by printing semiconducting nanowires onto plastics and paper. Nanowires have excellent electron mobility, but they hadn’t been used in large-area electronics before. Materials like the ones Javey developed will also allow for fascinating new functions for e-skins. My team has developed electromagnetic coupling technology for e-skin, which would enable wireless power transmission.
Imagine being able to charge your prosthetic arm by resting your hand on a charging pad on your desk. In principle, any sort of conductor could work for this, but if materials with higher electron mobility are used, the transmission frequency could increase, resulting in more efficient coupling. Linking sensors with radio-frequency communication modules within an e-skin would also allow the wireless transmission of information from skin to computer—or, conceivably, to other e-skinned people.
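As a back-of-the-envelope illustration of the 360-degree proximity sensing mentioned above (the echo time below is just an example value, not a measured figure), converting an ultrasonic echo time into an obstacle distance is straightforward:

# Rough sketch: converting an ultrasonic echo time into an obstacle distance,
# as each cell of a proximity-sensing ultrasonic skin would need to do.
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 °C

def echo_to_distance(echo_time_s):
    # The pulse travels to the obstacle and back, so halve the round trip.
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(echo_to_distance(0.003))  # a 3 ms echo corresponds to roughly 0.51 m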

Conclusions

Electronic devices are in greater demand when they are compact and function well, and artificial skin is one such device that shows the beauty of electronics in daily life. Scientists have created artificial skin that emulates human touch; according to experts, it is "smarter and similar to human skin" and offers greater sensitivity and resolution than current commercially available techniques. Bendable sensors and displays have made the tech rounds before. With such sensors we may be able to warn a patient of an oncoming heart attack hours in advance. In the future, virtual screens may even be placed on the device to report on our body functions, with further uses in car dashboards, interactive wallpapers and smart watches.

Big data

Big data refers to data sets that are so voluminous and complex that traditional data-processing application software is inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating and information privacy. There are three dimensions to big data, known as Volume, Variety and Velocity.
Lately, the term "big data" tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem." Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on." Scientists, business executives, practitioners of medicine, advertising and governments alike regularly meet difficulties with large data-sets in areas including Internet search, fintech, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology and environmental research.

Data sets grow rapidly - in part because they are increasingly gathered by cheap and numerous information-sensing Internet of things devices such as mobile devices, aerial (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; as of 2012, every day 2.5 exabytes (2.5×10^18 bytes) of data are generated. By 2025, IDC predicts there will be 163 zettabytes of data. One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.

Walking Into Big Data

Big Data is huge in volume, it is captured at a fast rate, and it may be structured, unstructured or some combination of the two. These factors make Big Data difficult to capture, manage and mine using conventional or traditional methods.

1.2 Aim/Objective
Perform association rule mining and FP-Growth on big data from the e-commerce market to find frequent patterns and association rules among the item sets present in the database, using a reduced Apriori algorithm and a reduced FP-Growth algorithm on top of Mahout (an open-source Java library/API) built on the Hadoop MapReduce framework.
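As a purely illustrative, single-machine sketch of what "frequent patterns" and "association rules" mean (this is not the Mahout/Hadoop implementation referred to above; the toy basket data and the minimum support threshold are made up), a level-wise Apriori-style search looks like this:

# Minimal single-machine sketch of Apriori-style frequent itemset mining on a
# toy e-commerce basket dataset. All item names and thresholds are made up.
from itertools import combinations

transactions = [
    {"laptop", "mouse", "keyboard"},
    {"laptop", "mouse"},
    {"phone", "charger"},
    {"laptop", "keyboard"},
    {"phone", "charger", "case"},
]
MIN_SUPPORT = 2  # an itemset must appear in at least 2 baskets

def support(itemset):
    # Count how many transactions contain every item in the itemset.
    return sum(1 for t in transactions if itemset <= t)

# Level-wise search: frequent 1-itemsets, then 2-itemsets built from them, ...
items = {i for t in transactions for i in t}
frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= MIN_SUPPORT}
all_frequent = set(frequent)
k = 2
while frequent:
    # Candidate k-itemsets are unions of frequent (k-1)-itemsets.
    candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
    frequent = {c for c in candidates if support(c) >= MIN_SUPPORT}
    all_frequent |= frequent
    k += 1

for itemset in sorted(all_frequent, key=len):
    print(set(itemset), "support =", support(itemset))

# A simple association rule: {laptop} -> {mouse} with
# confidence = support({laptop, mouse}) / support({laptop})
conf = support(frozenset({"laptop", "mouse"})) / support(frozenset({"laptop"}))
print("confidence(laptop -> mouse) =", conf)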
1.3 Motivation
Big Data refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze. This definition is deliberately subjective: it does not pin down how large a dataset needs to be in order to count as big data, i.e. we cannot define big data as being bigger than a certain number of terabytes or thousands of gigabytes. We assume that as technology advances over time, the volume of datasets that qualify as big data will also rise. The definition can also differ from sector to sector, depending on which software tools are commonly available and what dataset sizes are typical in a particular industry. According to studies, big data in many sectors today ranges from a few dozen terabytes to thousands of terabytes.
• The velocity, variety and volume of data are growing day by day, which is why it is not easy to manage such large amounts of data.
• According to one study, 30 billion pieces of content are shared on Facebook every month.
Issues/Problems while analysing Big Data:
Volume:
• According to analysis, every day more than one billion shares are traded on the New York Stock Exchange.
• According to analysis, every day Facebook stores two billion comments and likes.
• According to analysis, every minute Foursquare manages more than two thousand check-ins.
• According to analysis, every minute TransUnion makes nearly 70,000 updates to credit files.
• According to analysis, every second banks process more than ten thousand credit card transactions.
Velocity:
We are producing data more rapidly than ever:
• Processes are more and more automated.
• People are interacting online more and more.
• Systems are more and more interconnected.
Variety:
We are producing a variety of data, including:
• Social network connections
• Images
• Audio
• Video
• Log files
• Product rating comments
1.4 Background
Big data[5][6] is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
Gartner, and now much of the industry, continues to use this "3Vs" model for describing big data [7]. In 2012, Gartner updated its definition as follows: big data is high-volume, high-velocity and high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization [8]. Additionally, some organizations have added a fourth V, "Veracity", to describe it.
Big data has become an important factor of production in the economic and technology fields: like other essential factors of production such as hard assets and human capital, much of present-day economic activity simply could not take place without it. Looking at the current position of sectors in the US economy, companies with roughly one thousand workers store a minimum of about 200 TB of data on average (about twice the size of the data warehouse of US retailer Wal-Mart in 1999). In fact, many sectors hold more than 1 petabyte (PB) of stored data per organization on average. The growth of big data will continue to a high extent, thanks to modern technologies and platforms with the capability to handle large amounts of data, and to their growing number of users.

Utilization of Big Data Will Turn Out to Be a Key Basis of Competition and Growth for Individual Firms:
Usage of big data has become an important way for leading firms to improve their data handling and outperform their peers. To take the example of a retail company, a retailer can increase its operating margin by approximately 60% by embracing big data. Leading retailers such as the UK's Tesco use big data to hold on to market revenue share against their local competitors.
The emergence of big data also has the capability to open up new growth opportunities for companies that combine and analyze industry data. Even companies that sit at the mid-point of large information flows, where data about the objectives and demands of their users, services, buyers, products and suppliers can be captured and analyzed, stand to benefit from big data.
The use of big data in any firm can facilitate better, more enhanced analysis of data and its outcomes; by deploying big data, a firm can achieve lower product prices, higher quality and a better match between the company's offering and customers' needs. The step towards adopting big data can thus improve consumer surplus and accelerate performance across all companies.

Figure 1.1: Types of data generated

Figure 1.2: Impact of Big Data
Significance of Big Data:
Government sector:
• The Obama administration announced a big data R&D initiative to help handle several of the obstacles and problems the government faces nowadays. The initiative comprised 84 big data programs across 6 different departments.
• Big data analysis played a big role in Obama's successful 2012 re-election campaign.

Private sector:
• eBay.com uses two data warehouses of 7.5 petabytes and 40 petabytes, as well as a 40-petabyte Hadoop cluster, for merchandising, recommendations and search.
• Every day, Amazon.com handles millions of back-end operations as well as, on average, more than half a million queries from third-party sellers.
• Walmart processes more than 1 million customer transactions every hour, which are imported into databases and analyzed.
• Facebook stores and processes 50 billion photos from its users.
• FICO's Falcon Credit Card Fraud Detection System handles and secures 2.1 billion active accounts worldwide.
• According to estimates, the volume of data stored by businesses doubles every 1.2 years.
• Windermere Real Estate uses GPS signals from nearly 100 million drivers to help new home seekers estimate their typical commute times to and from work at various times of the day.