Saturday 31 March 2018

Microcontroller

A microcontroller is a compact integrated circuit designed to govern a specific operation in an embedded system. A typical microcontroller includes a processor, memory, and input/output (I/O) peripherals on a single chip.
Sometimes referred to as an embedded controller or microcontroller unit (MCU), microcontrollers are found in vehicles, robots, office machines, medical devices, home appliances, and many other devices.

Microcontroller features:

A microcontroller's processor will vary by application. Options range from simple 4-bit, 8-bit, or 16-bit processors to more complex 32-bit and 64-bit processors. In terms of memory, microcontrollers can use random access memory (RAM), flash memory, EPROM, or EEPROM.
When they first became available, microcontrollers were programmed solely in assembly language.
MCUs feature input and output pins to implement peripheral functions. Such functions include analog-to-digital converters, liquid crystal display (LCD) controllers, and real-time clocks.
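As a rough sketch of how such peripherals are used, the helper below converts a raw sample from an analog-to-digital converter into a voltage. The 10-bit resolution and 3.3 V reference are illustrative assumptions, not values taken from any particular MCU.

```python
def adc_to_voltage(raw, v_ref=3.3, bits=10):
    """Convert a raw ADC reading into a voltage.

    raw   -- integer sample from the converter (0 .. 2**bits - 1)
    v_ref -- the ADC's reference voltage (3.3 V assumed here)
    bits  -- converter resolution (a 10-bit ADC yields codes 0..1023)
    """
    max_code = (1 << bits) - 1
    if not 0 <= raw <= max_code:
        raise ValueError("raw reading out of range for a %d-bit ADC" % bits)
    return raw * v_ref / max_code

# A half-scale reading sits near the middle of the reference range:
print(round(adc_to_voltage(512), 2))  # 1.65
```

On a real MCU the raw value would come from the ADC peripheral itself (for example, MicroPython's machine.ADC on supported boards).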

Friday 16 March 2018

Internet of Things (IoT)

The Internet of Things refers to the use of intelligently connected devices and systems to leverage data gathered by embedded sensors and actuators in machines and other physical objects.

IoT is expected to spread rapidly over the coming years, and this convergence will unleash a new dimension of services that improve the quality of life of consumers and the productivity of enterprises.

IoT describes a system where items in the physical world, and sensors within or attached to these items, are connected to the internet via wireless and wired connections. These sensors can use various types of local area connections such as RFID, NFC, Wi-Fi, Bluetooth, and ZigBee.
Sensors can also have wide area connectivity such as GSM, GPRS, 3G, and LTE.

Applications:
Connect to both inanimate and living things:
Early trial deployments of IoT began with connecting industrial equipment. Today, the vision of IoT has expanded to connect everything from industrial equipment to everyday objects. For example, the Cow Tracking Project in Essex uses data collected from radio positioning tags to monitor cows and to understand behavior in the herd.
 
Use sensors for data collection:
The physical objects that are connected will possess one or more sensors. Each sensor monitors a specific function such as location, motion, temperature, vibration, etc.
In IoT, sensors will connect to each other and to systems that can understand or present information from the sensors' data feeds. These sensors will provide new information to people and companies.
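As a minimal sketch of acting on such data feeds, the code below filters sensor readings against per-kind thresholds. The sensor IDs (echoing the cow-tracking example) and the threshold values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str   # the physical object the sensor is attached to
    kind: str        # what it monitors: location, motion, temperature, ...
    value: float

def alerts(readings, limits):
    """Return readings whose value exceeds the limit for their kind."""
    return [r for r in readings if r.kind in limits and r.value > limits[r.kind]]

feed = [
    SensorReading("cow-17", "temperature", 39.8),
    SensorReading("cow-17", "motion", 0.2),
    SensorReading("cow-23", "temperature", 38.4),
]
# Flag any cow whose temperature exceeds an (invented) 39.0 degree limit:
hot = alerts(feed, {"temperature": 39.0})
print([r.sensor_id for r in hot])  # ['cow-17']
```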


Smart Home Automation:
Using IoT, we can control home appliances by connecting them to sensors and switches and programming them so that, on hearing a voice command, we can control those appliances.
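A toy sketch of that idea: once speech has been converted to text, the command is just a lookup that switches an appliance. The phrases and appliance names are invented; real voice recognition is of course far more involved.

```python
# Map recognised phrases to appliance actions (phrases are invented).
COMMANDS = {
    "turn on the light": ("light", "on"),
    "turn off the light": ("light", "off"),
    "turn on the fan": ("fan", "on"),
}

def handle_command(phrase, state):
    """Apply a recognised voice command to the appliance state dict."""
    phrase = phrase.lower().strip()
    if phrase not in COMMANDS:
        return state            # unknown phrases change nothing
    appliance, action = COMMANDS[phrase]
    new_state = dict(state)     # leave the caller's dict untouched
    new_state[appliance] = action
    return new_state

state = {"light": "off", "fan": "off"}
state = handle_command("Turn on the light", state)
print(state)  # {'light': 'on', 'fan': 'off'}
```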
These are some of the applications of IoT and how it is connected.

Tuesday 13 March 2018

Linux Operating System

Just like Windows XP, Windows 7, Windows 8, and Mac OS X, Linux is an operating system. An operating system is software that manages all of the hardware resources associated with your desktop or laptop. To put it simply – the operating system manages the communication between your software and your hardware. Without the operating system (often referred to as the “OS”), the software wouldn’t function.
The OS is comprised of a number of pieces: 

 The Bootloader:
                     The software that manages the boot process of your computer. For most users, this will simply be a splash screen that pops up and eventually goes away to boot into the operating system.

  The kernel:
                       This is the one piece of the whole that is actually called “Linux”. The kernel is the core of the system and manages the CPU, memory, and peripheral devices. The kernel is the “lowest” level of the OS.
 
Daemons:
                  These are background services (printing, sound, scheduling, etc) that either start up during boot, or after you log into the desktop.

The Shell:
               You’ve probably heard mention of the Linux command line. This is the shell – a command process that allows you to control the computer via commands typed into a text interface. This is what, at one time, scared people away from Linux the most (assuming they had to learn a seemingly archaic command line structure to make Linux work). This is no longer the case. With modern desktop Linux, there is no need to ever touch the command line.

 Graphical Server:
                               This is the sub-system that displays the graphics on your monitor. It is commonly referred to as the X server or just “X”.

 Desktop Environment:
                               This is the piece of the puzzle that the users actually interact with. There are many desktop environments to choose from (Unity, GNOME, Cinnamon, Enlightenment, KDE, XFCE, etc). Each desktop environment includes built-in applications (such as file managers, configuration tools, web browsers, games, etc).

Hardware support:

 The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range of computer architectures, including the hand-held ARM-based iPAQ and the IBM mainframes System z9 or System z10. Specialized distributions and kernel forks exist for less mainstream architectures; for example, the ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors, while the µClinux kernel fork may run on systems without a memory management unit. The kernel also runs on architectures that were only ever intended to use a manufacturer-created operating system, such as Macintosh computers (with both PowerPC and Intel processors), PDAs, video game consoles, portable music players, and mobile phones.

Android 8.0 Oreo and its features that you need to definitely check out

Three years ago Google introduced us to its new design language called Material Design. It was flat, fun, graphical and colorful. It was the visual change that ushered in the beginning of a new age for Android and its applications, one that focused less on the rapid expansion of Android’s feature set, and more on refining what already existed and paving the way for the future.
 Android 8.0 represents the current pinnacle of that effort, the very tip of the spear, fresh from Google’s workshop. Android 8.0 Oreo is as comprehensive a version of Android as there has ever been, and it is as stable, feature-rich and functional as ever.

It is 2x Faster:
 Get started on your favourite tasks more quickly with 2x the boot speed when powering up*
*boot time, as measured on Google Pixel.

 Background Limits:
Android Oreo helps minimise background activity in the apps that you use least. It's the super power you can't even see.

AutoFill :
 With your permission, AutoFill remembers your logins to get you into your favourite apps at supersonic speed.

Android Instant Apps :
Teleport directly into new apps straight from your browser, no installation needed.

Notification Dots
Android apps that have new notifications will now have a dot appear on the app icon to notify you. This isn't entirely new; something similar has been available on a few devices from Samsung, Asus, and HTC, among others, which indicated the number of unread notifications in each app. With Notification Dots, there is no counter, but you can long-press on the icon to peek at the notification right away. With Android Oreo, we will see this feature roll out on all phones for a uniform experience.

Installing unknown apps got simpler
Tired of allowing apps to install from unknown sources? Oreo allows you to whitelist unknown app installations from Chrome, Google Drive, and Gmail, without needing to enable unknown sources. So if you were to download an APK from your favourite site or from Gmail, it can be installed without issues. To keep your device safe, Google Play Protect is enabled in Oreo by default and periodically scans the phone for malware. It will alert you about rogue apps from time to time to keep your Android device out of danger.
These are some of the exciting features of Android 8.0 Oreo that you must try at least once.

Tuesday 13 February 2018

Artificial Intelligence

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.
Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.
Research associated with artificial intelligence is highly technical and specialized.
 The core problems of artificial intelligence include programming computers for certain traits such as:
  • Knowledge
  • Reasoning
  • Problem solving
  • Perception
  • Learning
  • Planning
  • Ability to manipulate and move objects
Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Instilling common sense, reasoning and problem-solving power in machines is a difficult and tedious task.
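As a toy illustration of objects, categories, properties, and the relations between them, the sketch below stores facts as triples and inherits properties along is_a links. The facts themselves are invented.

```python
# A toy knowledge base: facts stored as (subject, relation, object) triples.
FACTS = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
}

def holds(subject, relation, obj, facts=FACTS):
    """Check a fact, following is_a links so properties are inherited."""
    if (subject, relation, obj) in facts:
        return True
    # A subject inherits properties from any category it belongs to.
    for s, r, parent in facts:
        if s == subject and r == "is_a" and holds(parent, relation, obj, facts):
            return True
    return False

print(holds("canary", "can", "fly"))  # True: canary is_a bird, and birds can fly
```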

AI Specialization

  • games playing: programming computers to play games against human opponents
  • expert systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)
  • natural language: programming computers to understand natural human languages
  • neural networks: Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains
  • robotics: programming computers to see and hear and react to other sensory stimuli
Natural Language and Voice Recognition:

    Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators. There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.
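That gap shows up in even the simplest approach to translation, word-for-word dictionary substitution: grammar, word order, and ambiguity are all ignored. The tiny English-to-Spanish lexicon below is invented for illustration.

```python
# A rudimentary word-for-word translator; real systems are far more
# sophisticated, but like this one they do not *understand* the words.
LEXICON = {"the": "el", "cat": "gato", "eats": "come", "fish": "pescado"}

def translate(sentence):
    # Unknown words pass through unchanged -- one reason naive systems
    # fall so far short of human translators.
    return " ".join(LEXICON.get(word, word) for word in sentence.lower().split())

print(translate("The cat eats fish"))  # el gato come pescado
```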
    It is a fast-growing field through which computers and robots will acquire intelligence of their own.

    Monday 12 February 2018

    Microprocessor

    A microprocessor is a programmable electronic chip that has computing and decision-making capabilities similar to the central processing unit of a computer.
    The microprocessor is a semiconductor device manufactured by the VLSI (Very Large Scale Integration) technique. It includes register arrays and control circuits on a single chip.
    To perform a function or useful task, we have to form a system by using the microprocessor as a CPU and interfacing memory, input, and output devices to it. A system designed using a microprocessor as its CPU (Central Processing Unit) is called a microcomputer.
    When your computer is turned on, the microprocessor first gets instructions from the basic input/output system (BIOS) that comes with the computer as part of its memory.
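A highly simplified sketch of that fetch-and-execute behaviour, using an invented three-instruction machine rather than any real processor's instruction set:

```python
def run(program):
    """Fetch, decode, and execute instructions until HALT."""
    acc, pc = 0, 0                  # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]       # fetch the next instruction from memory
        if op == "LOAD":            # decode ...
            acc = arg               # ... and execute
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            break
        pc += 1                     # advance to the next instruction
    return acc

# Load 2, add 3, halt:
print(run([("LOAD", 2), ("ADD", 3), ("HALT", None)]))  # 5
```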
      

    Structure

    The internal structure of a microprocessor depends on the age of the design and the intended purpose of the microprocessor.
    The complexity of an integrated circuit (IC) is bounded by physical limitations on the number of transistors that can be put on one chip.
    As circuits have grown more complex, programming has become easier, because the internal circuitry need not be handled by the user; users only need to know some basics of programming.
             
    Occasionally, physical limitations of integrated circuits made such practices as a bit slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
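That bit-slice arrangement can be simulated directly. The sketch below adds two 32-bit words four bits at a time, passing the carry from each slice to the next, just as the extra carry-handling logic in a real bit-slice design would.

```python
def slice_add(a, b, word_bits=32, slice_bits=4):
    """Add two words using narrow ALU 'slices', propagating carry between them."""
    mask = (1 << slice_bits) - 1
    result, carry = 0, 0
    for shift in range(0, word_bits, slice_bits):
        # What one 4-bit slice computes: its operand bits plus incoming carry.
        partial = ((a >> shift) & mask) + ((b >> shift) & mask) + carry
        result |= (partial & mask) << shift   # this slice's result bits
        carry = partial >> slice_bits         # overflow passed to the next slice
    return result & ((1 << word_bits) - 1)    # final carry out is dropped

print(hex(slice_add(0x0000FFFF, 0x00000001)))  # 0x10000
```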

    Special-purpose designs

     

    A microprocessor is a general purpose system. Several specialized processing devices have followed from the technology:
    • A digital signal processor (DSP) is specialized for signal processing.
    • Graphics processing units (GPUs) are processors designed primarily for realtime rendering of 3D images. They may be fixed function (as was more common in the 1990s), or support programmable shaders. With the continuing rise of GPGPU, GPUs are evolving into increasingly general purpose stream processors (running compute shaders), whilst retaining hardware assist for rasterizing, but still differ from CPUs in that they are optimized for throughput over latency, and are not suitable for running application or OS code.
    • Other specialized units exist for video processing and machine vision.
    • Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. These tend to have different tradeoffs compared to CPUs.
    Market statistics

      In 1997, about 55% of all CPUs sold in the world were 8-bit microcontrollers; over two billion of them were sold.
      In 2002, less than 10% of all the CPUs sold in the world were 32-bit or more. Of all the 32-bit CPUs sold, about 2% were used in desktop or laptop personal computers.
       Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals.
       Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over US$6 (equivalent to $8.16 in 2017).
      In 2003, about US$44 billion (equivalent to $58.53 billion in 2017) worth of microprocessors were manufactured and sold. Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 2% of all CPUs sold.
       The quality-adjusted price of laptop microprocessors improved −25% to −35% per year in 2004–2010, and the rate of improvement slowed to −15% to −25% per year in 2010–2013.
                                                   About ten billion CPUs were manufactured in 2008. Most new CPUs produced each year are embedded.
       

    Sunday 4 February 2018

    SWIFT The Programming Language:

    Swift is Apple's new programming language for native iOS applications. It complements Objective-C, and with C-style programming falling out of favour, there has never been a better time to learn a new language.
    Objective-C developers will find many similarities with Swift, such as strong typing, along with features like no header files, generics, and more.
                                      The goal of the Swift project is to create the best available language for uses ranging from systems programming, to mobile and desktop apps, scaling up to cloud services. Most importantly,
                                                  Swift is designed to make writing and maintaining correct programs easier for the developer. To achieve this goal, we believe that the most obvious way to write Swift code must also be:
    • It should be safe for application development.
    • It should be fast enough to replace C-based languages such as C# and C++.
    • Closures unified with function pointers
    • Tuples and multiple return values
    • Generics
    • Fast and concise iteration over a range or collection
    • Structs that support methods, extensions, and protocols
    • Functional programming patterns, e.g., map and filter
    • Powerful error handling built-in
    • Advanced control flow with the do, guard, defer, and repeat keywords
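Several of these features (tuples as multiple return values, closures, map and filter) are not unique to Swift. For readers more familiar with Python, a sketch of the analogous patterns; the Swift spellings differ, but the concepts carry over directly.

```python
# Tuples and multiple return values
def min_max(values):
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])

# Closures: make_adder returns a function that captures n
def make_adder(n):
    return lambda x: x + n

add2 = make_adder(2)

# Functional patterns over a range: filter the evens, then map to squares
evens_squared = list(map(lambda x: x * x, filter(lambda x: x % 2 == 0, range(6))))

print(lo, hi, add2(40), evens_squared)  # 1 5 42 [0, 4, 16]
```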


    Projects

    The Swift language is managed as a collection of projects, each with its own repositories. The current list of projects includes:
    • The Swift compiler command line tool
    • The standard library bundled as part of the language
    • Core libraries that provide higher-level functionality
    • The LLDB debugger which includes the Swift REPL
    • The Swift package manager for distributing and building Swift source code
    • Xcode playground support to enable playgrounds in Xcode. 

      Linux

      Open-source Swift can be used on Linux to build Swift libraries and applications. The open-source binary builds provide the Swift compiler and standard library, Swift REPL and debugger (LLDB), and the core libraries, so one can jump right in to Swift development.

      New Platforms

      We can’t wait to see the new places we can bring Swift—together. We truly believe that this language that we love can make software safer, faster, and easier to maintain. We’d love your help to bring Swift to even more computing platforms.
    Conclusion:
    Swift is a very good programming language for developing your skills and becoming successful in app development.

    Saturday 3 February 2018

    Amazon Web Services

    Amazon Web Services (AWS) is a subsidiary of Amazon.com that provides on-demand cloud computing platforms to individuals, companies and governments, on a paid subscription basis with a free-tier option available for 12 months.
     The technology allows subscribers to have at their disposal a full-fledged virtual cluster of computers, available all the time, through the Internet. AWS's version of virtual computers have most of the attributes of a real computer including hardware (CPU(s) & GPU(s) for processing, local/RAM memory, hard-disk/SSD storage); a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, CRM, etc.
                                               Each AWS system also virtualizes its console I/O (keyboard, display, and mouse), allowing AWS subscribers to connect to their AWS system using a modern browser. The browser acts as a window into the virtual computer, letting subscribers log in, configure, and use their virtual systems just as they would a real physical computer. They can choose to deploy their AWS systems to provide internet-based services for their own and their customers' benefit.
    The AWS technology is implemented at server farms throughout the world, and maintained by the Amazon subsidiary. Fees are based on a combination of usage, the hardware/OS/software/networking features chosen by the subscriber, required availability, redundancy, security, and service options.   

                                             Based on what the subscriber needs and pays for, they can reserve a single virtual AWS computer, a cluster of virtual computers, a physical (real) computer dedicated for their exclusive use, or even a cluster of dedicated physical computers.
                                     As part of the subscription agreement, Amazon manages, upgrades, and provides industry-standard security to each subscriber's system. AWS operates from many global geographical regions including 6 in North America.
    In 2017, AWS comprised more than 90 services spanning a wide range including computing, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things.
                                              The most popular include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). Most services are not exposed directly to end users, but instead offer functionality through APIs for developers to use in their applications.

    Growth and profitability

    In November 2010, it was reported that all of Amazon.com's retail sites had been completely moved under the AWS umbrella. Prior to 2012, AWS was considered a part of Amazon.com and so its revenue was not delineated in Amazon financial statements.
                                 In that year industry watchers for the first time estimated AWS revenue to be over $1.5 billion.
    In April 2015, Amazon.com reported AWS was profitable, with sales of $1.57 billion in the first quarter of the year and $265 million of operating income.
                                        Founder Jeff Bezos described it as a fast-growing $5 billion business; analysts described it as "surprisingly more profitable than forecast". In October 2015, Amazon.com said in its Q3 earnings report that AWS's operating income was $521 million, with operating margins at 25 percent. AWS's 2015 Q3 revenue was $2.1 billion, a 78% increase from 2014's Q3 revenue of $1.17 billion. 2015 Q4 revenue for the AWS segment increased 69.5% y/y to $2.4 billion with 28.5% operating margin, giving AWS a $9.6 billion run rate.
    In 2015, Gartner estimated that AWS customers were deploying 10x more infrastructure on AWS than the combined adoption of the next 14 providers.
    In 2016 Q1, revenue was $2.57 billion with net income of $604 million, a 64% increase over 2015 Q1 that resulted in AWS being more profitable than Amazon's North American retail business for the first time. In the first quarter of 2016,
                         Amazon experienced a 42% rise in stock value as a result of increased earnings, of which AWS contributed 56% to corporate profits.
    With a 50% increase in revenues the past few years, AWS is expected to have $18 billion in annual revenue in 2017.
    Services' offerings are accessed over HTTP, using the REST architectural style and the SOAP protocol.
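In the REST style, a service call is an ordinary HTTP request: the resource is named in the URL path and options go in the query string. The endpoint and parameters below are hypothetical, purely to show the shape of such a request; real AWS calls also require authentication via request signing.

```python
from urllib.parse import urlencode

def build_request_url(base, resource, params):
    """Compose a REST-style URL: resource in the path, options in the query."""
    return "%s/%s?%s" % (base, resource, urlencode(sorted(params.items())))

url = build_request_url(
    "https://service.example.com",          # hypothetical endpoint
    "buckets/my-bucket/objects",            # hypothetical resource path
    {"max-keys": 10, "prefix": "logs/"},
)
print(url)  # https://service.example.com/buckets/my-bucket/objects?max-keys=10&prefix=logs%2F
```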
     Cloud computing expertise has become a top qualification for engineers looking to settle into a career; for those who know it, it is a dream job.

    Swift Programming Language

    About Swift


    Swift is a fantastic way to write software, whether it’s for phones, desktops, servers, or anything else that runs code. It’s a safe, fast, and interactive programming language that combines the best in modern language thinking with wisdom from the wider Apple engineering culture and the diverse contributions from its open-source community. The compiler is optimized for performance and the language is optimized for development, without compromising on either.
    Swift is friendly to new programmers. It’s an industrial-quality programming language that’s as expressive and enjoyable as a scripting language. Writing Swift code in a playground lets you experiment with code and see the results immediately, without the overhead of building and running an app.
    Swift defines away large classes of common programming errors by adopting modern programming patterns:
    • Variables are always initialized before use.
    • Array indices are checked for out-of-bounds errors.
    • Integers are checked for overflow.
    • Optionals ensure that nil values are handled explicitly.
    • Memory is managed automatically.
    • Error handling allows controlled recovery from unexpected failures.
    Swift code is compiled and optimized to get the most out of modern hardware. The syntax and standard library have been designed based on the guiding principle that the obvious way to write your code should also perform the best. Its combination of safety and speed make Swift an excellent choice for everything from “Hello, world!” to an entire operating system.
    Swift combines powerful type inference and pattern matching with a modern, lightweight syntax, allowing complex ideas to be expressed in a clear and concise manner. As a result, code is not just easier to write, but easier to read and maintain as well.
    Swift has been years in the making, and it continues to evolve with new features and capabilities. Our goals for Swift are ambitious. We can’t wait to see what you create with it. 

    Version Compatibility

    This book describes Swift 4.0.3, the default version of Swift that’s included in Xcode 9.2. You can use Xcode 9.2 to build targets that are written in either Swift 4 or Swift 3.
    When you use Xcode 9.2 to build Swift 3 code, most of the new Swift 4 functionality is available. That said, the following features are available only to Swift 4 code:
    • Substring operations return an instance of the Substring type, instead of String.
    • The @objc attribute is implicitly added in fewer places.
    • Extensions to a type in the same file can access that type’s private members.
    A target written in Swift 4 can depend on a target that’s written in Swift 3, and vice versa. This means, if you have a large project that’s divided into multiple frameworks, you can migrate your code from Swift 3 to Swift 4 one framework at a time. 

    Features

    Swift is an alternative to the Objective-C language that employs modern programming-language theory concepts and strives to present a simpler syntax. During its introduction, it was described simply as "Objective-C without the C".
    By default, Swift does not expose pointers and other unsafe accessors, in contrast to Objective-C, which uses pointers pervasively to refer to object instances. Also, Objective-C's use of a Smalltalk-like syntax for making method calls has been replaced with a dot-notation style and namespace system more familiar to programmers from other common object-oriented (OO) languages like Java or C#. Swift introduces true named parameters and retains key Objective-C concepts, including protocols, closures and categories, often replacing former syntax with cleaner versions and allowing these concepts to be applied to other language structures, like enumerated types (enums)

    Syntactic sugar

    Under the Cocoa and Cocoa Touch environments, many common classes were part of the Foundation Kit library. This included the NSString string library (using Unicode), the NSArray and NSDictionary collection classes, and others. Objective-C provided various bits of syntactic sugar to allow some of these objects to be created on-the-fly within the language, but once created, the objects were manipulated with object calls. For instance, in Objective-C concatenating two NSStrings required method calls similar to this:

    NSString *str = @"hello,";
    str = [str stringByAppendingString:@" world"];

    Development and other implementations

    Since the language is open-source, there are prospects of it being ported to the web. Some web frameworks have already been developed, such as IBM's Kitura, Perfect and Vapor.
    An official "Server APIs" work group has also been started by Apple, with members of the Swift developer community playing a central role.
    A second free implementation of Swift that targets Cocoa, Microsoft's Common Language Infrastructure (.NET), and the Java and Android platform exists as part of the Elements Compiler from RemObjects Software.

    Data Scientists and Their Categories

    In computer science and computer programming, a data type (or simply type) is a classification of data which tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support various types of data, for example: real, integer, or Boolean. A data type provides a set of values from which an expression (i.e. a variable, function, ...) may take its values. The type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored.

    Categories of data scientists:
    • Those strong in statistics: they sometimes develop new statistical theories for big data, that even traditional statisticians are not aware of. They are expert in statistical modeling, experimental design, sampling, clustering, data reduction, confidence intervals, testing, modeling, predictive modeling and other related techniques.
    • Those strong in mathematics: NSA (national security agency) or defense/military people working on big data, astronomers, and operations research people doing analytic business optimization (inventory management and forecasting, pricing optimization, supply chain, quality control, yield optimization) as they collect, analyse and extract value out of data.
    • Those strong in data engineering, Hadoop, database/memory/file systems optimization and architecture, API's, Analytics as a Service, optimization of data flows, data plumbing.
    • Those strong in machine learning / computer science (algorithms, computational complexity)
    • Those strong in business, ROI optimization, decision sciences, involved in some of the tasks traditionally performed by business analysts in bigger companies (dashboards design, metric mix selection and metric definitions, ROI optimization, high-level database design)
    • Those strong in production code development, software engineering (they know a few programming languages)
    • Those strong in visualization
    • Those strong in GIS, spatial data, data modeled by graphs, graph databases
    • Those strong in a few of the above. After 20 years of experience across many industries, big and small companies (and lots of training), I'm strong both in stats, machine learning, business, mathematics and more than just familiar with visualization and data engineering. This could happen to you as well over time, as you build experience. I mention this because so many people still think that it is not possible to develop a strong knowledge base across multiple domains that are traditionally perceived as separated (the silo mentality). Indeed, that's the very reason why data science was created.
    Most of them are familiar or expert in big data. 
    There are other ways to categorize data scientists, see for instance our article on Taxonomy of data scientists. A different categorization would be creative versus mundane. The "creative" category has a better future, as mundane can be outsourced (anything published in textbooks or on the web can be automated or outsourced - job security is based on how much you know that no one else know or can easily learn). Along the same lines, we have science users (those using science, that is, practitioners; often they do not have a PhD), innovators (those creating new science, called researchers), and hybrids. Most data scientists, like geologists helping predict earthquakes, or chemists designing new molecules for big pharma, are scientists, and they belong to the user category. 
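Returning to the opening paragraph's point that the type defines the operations that can be done on the data, a small Python illustration: the same + operator means addition for integers but concatenation for strings, because the interpreter consults each value's type.

```python
a, b = 2, 3
print(a + b)        # 5  (integer addition)

s, t = "2", "3"
print(s + t)        # 23 (string concatenation, not arithmetic)

# The interpreter tracks each value's type, which fixes the allowed operations:
print(type(a).__name__, type(s).__name__)  # int str
```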

    Arduino

    Arduino is a single-board, open-hardware electronics prototyping platform, designed around an Atmel AVR microcontroller with built-in input/output support and a standard programming language, which originates from Wiring and is essentially C/C++. The goal of the project is to create tools that are accessible, low-cost, flexible, and easy for novices and professionals to use, especially for those who would not otherwise have access to more sophisticated controllers and more complicated tools.
    Pode ser usado para o desenvolvimento de objetos interativos independentes, ou ainda para ser conectado a um computador hospedeiro. Uma típica placa Arduino é composta por um controlador, algumas linhas de E/S digital e analógica, além de uma interface serial ou USB, para interligar-se ao hospedeiro, que é usado para programá-la e interagi-la em tempo real. Ela em si não possui qualquer recurso de rede, porém é comum combinar um ou mais Arduinos deste modo, usando extensões apropriadas chamadas de shields A interface do hospedeiro é simples, podendo ser escrita em várias linguagens. A mais popular é a Processing, mas outras que podem comunicar-se com a conexão serial são: Max/MSP, Pure Data, SuperCollider, ActionScript e Java.

Platform

The board consists of an 8-bit Atmel AVR microcontroller with complementary components that ease programming and integration into other circuits. An important aspect is the standard way its connectors are exposed, allowing the CPU board to be connected to other expansion modules, known as shields. The original Arduinos use the megaAVR series of chips, especially the ATmega8, ATmega168, ATmega328 and ATmega1280, although many other processors have been used in clones.[20]
Most boards include a 5-volt linear regulator and a 16 MHz crystal oscillator (some variants use a ceramic resonator instead), although some designs such as the LilyPad run at 8 MHz and dispense with the on-board voltage regulator because of specific form-factor restrictions. Besides being a microcontroller, the chip also comes pre-programmed with a bootloader, which simplifies uploading programs to the on-chip flash memory, compared with other devices that typically demand an external programmer chip.

     

    Software

The Arduino IDE is a cross-platform application written in Java, derived from the Processing and Wiring projects. It is designed to introduce programming to artists and to people unfamiliar with software development. It includes a code editor with syntax highlighting, brace matching and automatic indentation, and it can compile and upload programs to the board with a single click. This removes the need to edit Makefiles or run programs in command-line environments.
Through a library called "Wiring", it can be programmed in C/C++. This makes many input and output operations easy to perform, and a working program only needs to define two functions:
• setup() – run once at startup, where it can be used to initialize settings, and
• loop() – called repeatedly to run a block of commands until the board is powered off.
Usually, the first program run on a board simply blinks an LED. In the development environment, the user writes an example program like this:
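The canonical "Blink" sketch, written in the Arduino dialect of C/C++ (pin 13 drives the on-board LED on most boards; it runs only on the hardware, so it is shown here for illustration):

```cpp
// Blink: toggle the on-board LED once per second.
const int ledPin = 13;          // on-board LED pin on most Arduino boards

void setup() {
  pinMode(ledPin, OUTPUT);      // configure the pin as a digital output
}

void loop() {
  digitalWrite(ledPin, HIGH);   // LED on
  delay(1000);                  // wait one second
  digitalWrite(ledPin, LOW);    // LED off
  delay(1000);
}
```

Uploading this with one click from the IDE is enough to see the board's LED blinking.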

    Data Science

Data science is also known as data-driven science. It is a concept that unifies statistics, data analysis and their related methods in order to understand actual phenomena with data. It employs techniques and theories drawn from many fields within the broad areas of mathematics, statistics, information science, and computer science, in particular from the subdomains of machine learning, classification, cluster analysis, data mining, databases, and visualization.
       
One way to consider data science is as an evolutionary step in interdisciplinary fields like business analysis that incorporate computer science, modeling, statistics and mathematics. At its core, data science involves using automated methods to analyze massive amounts of data and to extract knowledge from them. With such automated methods turning up everywhere from genomics to high-energy physics, data science is helping to create new branches of science, and influencing areas of social science and the humanities. The trend is expected to accelerate in the coming years as data from mobile sensors, sophisticated instruments, the web, and more, grows. In academic research, we will see an increasingly large number of traditional disciplines spawning new sub-disciplines with the adjective "computational" or "quantitative" in front of them. In industry, we will see data science transforming everything from healthcare to media.
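As a small illustration of such an automated method, the sketch below clusters one-dimensional measurements with a plain-Python version of k-means (Lloyd's algorithm). It uses only the standard library; real projects would reach for a library such as scikit-learn, and the deterministic initialization here is a simplification:

```python
# A toy "automated method": 1-D k-means clustering in plain Python.
# Real data science work would use a library such as scikit-learn;
# this sketch only shows the idea of extracting structure from data.

def kmeans_1d(values, k, iters=20):
    """Group numeric values into k clusters by alternating between
    assigning points to the nearest center and re-computing each
    center as the mean of its cluster (Lloyd's algorithm, 1-D)."""
    # Deterministic initialization: spread centers across the data range.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups of measurements: the algorithm recovers both means.
data = [1.0, 1.2, 0.8, 1.1, 9.8, 10.1, 10.3, 9.9]
print([round(c, 3) for c in kmeans_1d(data, k=2)])  # → [1.025, 10.025]
```

Nothing was told about where the groups are; the procedure finds them from the data alone, which is the essence of the "automated methods" mentioned above.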
       

    WHAT DATA SCIENCE MEANS FOR RESEARCH

    In virtually all areas of intellectual inquiry, data science offers a powerful new approach to making discoveries. By combining aspects of statistics, computer science, applied mathematics, and visualization, data science can turn the vast amounts of data the digital age generates into new insights and new knowledge.

    An Explosion of Data

    Data is increasingly cheap and ubiquitous. We are now digitizing analog content that was created over centuries and collecting myriad new types of data from web logs, mobile devices, sensors, instruments, and transactions. IBM estimates that 90 percent of the data in the world today has been created in the past two years.
    At the same time, new technologies are emerging to organize and make sense of this avalanche of data. We can now identify patterns and regularities in data of all sorts that allow us to advance scholarship, improve the human condition, and create commercial and social value. The rise of "big data" has the potential to deepen our understanding of phenomena ranging from physical and biological systems to human social and economic behavior.

    Cyber Physical Systems(CPS)

A cyber-physical system (CPS) is a mechanism that is controlled or monitored by computer-based algorithms. In a CPS, physical and software components are deeply intertwined: each component exhibits multiple behavioral modes and interacts with the others in myriad ways. Applications of CPS include the smart grid, autonomous automobile systems, medical monitoring, robotic systems and automatic pilot avionics.
CPS involves transdisciplinary approaches, merging theories of cybernetics, mechatronics, design and process science. The process control is often referred to as an embedded system. In embedded systems, the emphasis tends to be more on the computational elements, and less on an intense link between the computational and physical elements. CPS is also similar to the Internet of Things (IoT), sharing the same basic architecture; nevertheless, CPS presents a higher combination and coordination between physical and computational elements.

    Mobile Cyber-Physical Systems
Mobile cyber-physical systems, in which the physical system under study has inherent mobility, are a prominent subcategory of this technology.
Examples of mobile cyber-physical systems include mobile robotics and goods transported by robots using embedded systems.
      Smartphone platforms make ideal mobile cyber-physical systems for a number of reasons, including:
    • Significant computational resources, such as processing capability, local storage
    • Multiple sensory input/output devices, such as touch screens, cameras, GPS chips, speakers, microphone, light sensors, proximity sensors
    • Multiple communication mechanisms, such as WiFi, 3G, EDGE, Bluetooth for interconnecting devices to either the Internet, or to other devices
    • High-level programming languages that enable rapid development of mobile CPS node software, such as Java, Objective C, JavaScript, ECMAScript or C#
    • Readily-available application distribution mechanisms, such as the Android Market and Apple App Store
• End-user maintenance and upkeep, including frequent re-charging of the battery
Design: Design is a challenge, and a major point of difference between embedded systems and CPS, because these systems are built and used across so many engineering branches. Designing and deploying a cyber-physical production system can be done based on the 5C architecture (connection, conversion, cyber, cognition, and configuration). At the "Connection" level, devices can be designed to self-connect and self-sense their behavior. At the "Conversion" level, data from self-connected devices and sensors measure the features of critical issues; with self-aware capabilities, machines can use this self-aware information to predict their own potential issues. At the "Cyber" level, each machine creates its own "twin" using these instrumented features, and further characterizes the machine's health pattern based on a "Time-Machine" methodology.
Like the Internet of Things (IoT), CPS are smart systems that have cyber technologies, both hardware and software, deeply embedded in and interacting with physical components, sensing and changing the state of the real world. These systems have to operate with high levels of reliability, safety, security and usability, since they must meet the rapidly growing demand for applications such as the smart grid, the next-generation air transportation system, intelligent transportation systems, smart medical technologies, smart buildings and smart manufacturing. 2016 will be another milestone year in the development of these critical systems, which, while currently employed on a modest scale, do not come close to meeting the demand.
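The 5C flow can be sketched as a simple data pipeline. Every function name, number and threshold below is an invention for illustration, not a standard CPS API:

```python
# A hedged sketch of the 5C levels described above (connection,
# conversion, cyber, cognition, configuration). All names here are
# illustrative inventions, not a real framework.

def connection(machine):
    """Connection: acquire raw sensor readings from the device."""
    return machine["sensor_readings"]

def conversion(readings):
    """Conversion: turn raw data into a health-related feature
    (here simply the mean reading, as a stand-in)."""
    return sum(readings) / len(readings)

def cyber(feature, history):
    """Cyber: maintain the machine's 'twin' record over time and
    report how far the new feature drifts from its own baseline."""
    baseline = sum(history) / len(history)
    history.append(feature)
    return feature - baseline

def cognition(drift, threshold=0.5):
    """Cognition: decide whether the drift warrants attention."""
    return "inspect" if abs(drift) > threshold else "ok"

def configuration(decision):
    """Configuration: feed the decision back as an action."""
    return {"action": decision}

# One pass through the pipeline for a hypothetical machine whose
# latest readings drift above its historical baseline.
machine = {"sensor_readings": [0.9, 1.1, 1.0, 5.0]}
history = [1.0, 1.0]                       # prior 'twin' record
feature = conversion(connection(machine))  # mean of readings: 2.0
decision = cognition(cyber(feature, history))
print(configuration(decision))             # → {'action': 'inspect'}
```

The point of the sketch is the shape of the architecture: each level consumes the output of the one below it, and the top level closes the loop back to the physical machine.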

    Nonvolatile Memory

While nonvolatile memory sounds like a topic of interest only to tech geeks, it is actually huge for every person in the world who uses technology of any kind. As we become exponentially more connected, people need and use more and more memory. Nonvolatile memory, which is computer memory that retains information even after being turned off and back on, has been used for secondary storage due to issues of cost, performance and write endurance, as compared to volatile RAM memory that has been used as primary storage. In 2016, huge strides will be made in the development of new forms of nonvolatile memory, which promise to let a hungry world store more data at less cost, using significantly less power. This will literally change the landscape of computing, allowing smaller devices to store more data and large devices to store huge amounts of information.

    Definition - What does Non-Volatile Memory (NVM) mean?

    Non-volatile memory (NVM) is a type of computer memory that has the capability to hold saved data even if the power is turned off. Unlike volatile memory, NVM does not require its memory data to be periodically refreshed. It is commonly used for secondary storage or long-term consistent storage.
    Non-volatile memory is highly popular among digital media; it is widely used in memory chips for USB memory sticks and digital cameras. Non-volatile memory eradicates the need for relatively slow types of secondary storage systems, including hard disks.
    Non-volatile memory is also known as non-volatile storage.

    Volatile vs Non-Volatile Storage
In any computer system, there are two types of storage: the primary or volatile storage and the secondary or non-volatile storage. The main difference between volatile and non-volatile storage is what happens when you turn off the power. With non-volatile storage, as long as the data has already been written, it will remain for a considerable amount of time, potentially many years. Volatile memory needs constant power in order to retain the stored data; once the power goes out, the data is lost instantly.
The characteristics of non-volatile storage make it ideal for long-term data storage. Good examples include hard drives, memory cards, optical discs, and ROMs. Volatile storage serves a totally different purpose than non-volatile storage, since it cannot be used to reliably store information. Instead, it is used by the system to temporarily hold information. This is because of the inherent speed of volatile memory, which is typically thousands of times faster than most non-volatile storage. Faster is better, as it prevents the creation of a bottleneck as processors get faster and faster.
    Because of their very different uses, there is also a major difference in terms of capacities. Volatile memory is quite expensive per unit so typical capacities of volatile memory tend to be lower; from MBs to a few GBs. In contrast, non-volatile storage is now reaching a few TB for hard drives, and in the range of GB for most solid state drives.

    Electrically addressed

Electrically addressed semiconductor non-volatile memories can be categorized according to their write mechanism. Mask ROMs are factory programmable only, and are typically used for large-volume products that do not need to be updated after manufacture. Programmable read-only memory (PROM) can be altered after manufacture, but requires a special programmer and usually cannot be programmed while in the target system. The programming is permanent, and further changes require replacement of the device. Data is stored by physically altering (burning) storage sites in the device.

    Read-mostly devices

An EPROM is an erasable ROM that can be changed more than once. However, writing new data to an EPROM requires a special programmer circuit. EPROMs have a quartz window that allows them to be erased with ultraviolet light, but the whole device is cleared at one time. A one-time programmable (OTP) device uses an EPROM chip but omits the quartz window in the package; this is less costly to manufacture. An electrically erasable programmable read-only memory (EEPROM) uses electrical signals to erase memory. These erasable memory devices require a significant amount of time to erase data and to write new data; they are not usually configured to be programmed by the processor of the target system. Data is stored in floating-gate transistors, which require special operating voltages to trap or release electric charge on an insulated control gate at each storage site.

    Flash memory

    The flash memory chip is a close relative to the EEPROM; it differs in that it can only erase one block or "page" at a time. It is a solid-state chip that maintains stored data without any external power source. Capacity is substantially larger than that of an EEPROM, making these chips a popular choice for digital cameras and desktop PC BIOS chips.
Flash memory devices use two different logical technologies, NOR and NAND, to map data. NOR flash provides high-speed random access, reading and writing data in specific memory locations; it can retrieve as little as a single byte. NAND flash reads and writes sequentially at high speed, handling data in small blocks called pages, though its random-access reads are slower than NOR's. NAND flash reads faster than it writes, quickly transferring whole pages of data. Less expensive than NOR flash at high densities, NAND technology offers higher capacity for the same-size silicon.
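The page-erase constraint that separates flash from EEPROM can be illustrated with a toy model (the class and its methods are inventions for this sketch, not a real driver interface):

```python
# Toy model of the constraint described above: flash is erased a whole
# page at a time, so rewriting one byte means erasing (and rewriting)
# its entire page. Illustrative only, not a real flash driver API.

class FlashSim:
    PAGE_SIZE = 4          # tiny pages to keep the example readable

    def __init__(self, pages=2):
        # Erased flash reads back as 0xFF in every byte.
        self.mem = [0xFF] * (pages * self.PAGE_SIZE)
        self.erase_count = 0

    def program(self, addr, value):
        """Programming can only clear bits (0xFF -> value); it cannot
        set them back to 1, which is why erase exists at all."""
        self.mem[addr] &= value

    def erase_page(self, page):
        """Erase is page-granular: every byte in the page returns to 0xFF."""
        start = page * self.PAGE_SIZE
        self.mem[start:start + self.PAGE_SIZE] = [0xFF] * self.PAGE_SIZE
        self.erase_count += 1

    def rewrite(self, addr, value):
        """Changing one byte: save the page, erase it, write it all back."""
        page = addr // self.PAGE_SIZE
        start = page * self.PAGE_SIZE
        saved = self.mem[start:start + self.PAGE_SIZE]
        self.erase_page(page)
        for i, old in enumerate(saved):
            self.program(start + i, value if start + i == addr else old)

flash = FlashSim()
flash.program(0, 0x12)
flash.program(1, 0x34)
flash.rewrite(0, 0x56)             # one-byte change costs a page erase
print(hex(flash.mem[0]), hex(flash.mem[1]), flash.erase_count)
```

This read-modify-erase-rewrite cycle is also why flash controllers track erase counts: each page survives only a limited number of erases, and wear-leveling spreads them out.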

    Virtual Reality and Augmented Reality:

    After many years in which the “reality” of virtual reality (VR) has been questioned by both technologists and the public, 2016 promises to be the tipping point, as VR technologies reach a critical mass of functionality, reliability, ease of use, affordability and availability. Movie studios are partnering with VR vendors to bring content to market. News organizations are similarly working with VR companies to bring immersive experiences of news directly into the home, including live events. And the stage is set for broad adoption of VR beyond entertainment and gaming — to the day when VR will help change the physical interface between man and machine, propelling a world so far only envisioned in science fiction. At the same time, the use of augmented reality (AR) is expanding. Whereas VR replaces the actual physical world, AR is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input, such as sound, video, graphics or GPS data. With the help of advanced AR technology (e.g., adding computer vision and object recognition), the information about the surrounding real world of the user becomes interactive and can be manipulated digitally.

    Augmented and virtual reality have one big thing in common. They both have the remarkable ability to alter our perception of the world. Where they differ, is the perception of our presence.
    Virtual reality is able to transpose the user. In other words, bring us some place else. Through closed visors or goggles, VR blocks out the room and puts our presence elsewhere.
    Oculus Rift, Samsung Gear VR, Google Cardboard, these are names you may have heard about by now. But if you haven't tried virtual reality since that one arcade in the 80's, be ready to be blown away by how far it's come.
    Putting a VR headset over your eyes will leave you blind to the current world, but will expand your senses with experiences within. You might even find yourself on top of Mount Kilimanjaro. The immersion is quite dramatic, with some users reporting feelings of movement as they ascend a staircase or ride a rollercoaster within the virtual environment.
Augmented reality, however, takes our current reality and adds something to it. It does not move us elsewhere; it simply "augments" our current state of presence, often with clear visors. Samsung, for example, is nearly ready to introduce its Monitorless AR glasses, which would connect to phones or PCs via Wi-Fi and replace the screen on those devices.

    What's Hot in Augmented Reality?

    When Microsoft first demoed HoloLens at Build 2015, they stole the show. HoloLens created waves in the ocean of augmented reality, painting the most groundbreaking picture of what is to come in the ever expanding world of AR.
Microsoft is essentially injecting interactive holograms into our world to bridge the gap between your PC and your living room. Using HoloLens, you can literally surround yourself with your Windows apps. From a marketer's perspective, this becomes one more intensely immersive and promising way to reach our audience's homes.
    Cramer was fortunate to be one of the first agencies to receive the development edition of the HoloLens and the experiential future for our clients is already looking brighter. Using what we've learned experimenting with AR technology, we've already started building applications for product demos and more.
    In 2016, the world witnessed augmented reality take center stage in the form of Pokemon Go. The viral sensation that got Pikachu and Charizard out of the Gameboy and onto your front lawn, whether you wanted them there or not! This was the first major example of AR finding mass market acceptance and infiltrating our daily lives.
    Virtual and augmented realities in 2017 are already making dramatic leaps forward as startups find ways to introduce smell and touch to expand your sensory experiences. Technology company Immersion has introduced TouchSense Force, using haptic feedback to bring player's hands into VR worlds, and researchers at Stanford University’s Virtual Human Interaction Lab are having to resist eating foam doughnuts as they experiment with adding scent to VR.
    Also, beyond the obvious media and entertainment applications for AR/VR technologies, design and engineering companies the likes of Solidworks are demonstrating their commitment to immersive design with AR and VR related partnerships, including NVIDIA, Microsoft, Lenovo, and HTC Vive.

    The State of the AR/VR Adoption Rate

While both augmented reality and virtual reality are gaining speed, and are more relevant in our current marketplace than ever before as millions of users hunt Pokemon and the Oculus Rift becomes a consumer-ready device, they are still, more than anything, a toy for a small minority of marketers and tech enthusiasts.
The reason is that both are hindered by the limits of our ability to render 3D environments in real time. AR less so, because the environment already exists and you are just adding onto it, but the problem of creating high-resolution, life-like objects still persists.
    We can equate this back to early video games. Take the Nintendo N64 for example. 007: GoldenEye was a remarkable game for its time - and still has a major fan base today - but it has a very low polygon count. A polygon is the most basic form of 3D, and the more polygons that make up an image, the higher the 3D resolution.
Now, games have polygon counts in the millions per frame, and they are only getting better. Trailers for new games these days look more and more like movies than games, and that bodes well for the future of VR and AR experiences.

    Android(operating system)

There are different operating systems for mobiles and computers, just as there are different models of mobiles and computers. Android is one of the operating systems used by smartphones, and it is not simply an operating system: hardware designs and programming languages are part of the Android technology as well.
Android was originally created by a small startup, Android Inc., which Google acquired in 2005; Google now drives its further development. Android is based on the Linux kernel, and Linux is itself a Unix-like operating system widely used in communication and computing. This is one reason Android is in demand by so many users as their operating system. Google also offers an open choice for users to modify the platform and add new applications without needing Google's approval. Anyone can upload a new application to the Android app store, either free or paid. These applications can be easily downloaded by other users, who then enjoy extra features such as additional games, interactive media and business tools.
Android technology is open to anyone who wants to develop applications, as it encourages users to contribute new ideas using the programming code made accessible to them. The flexibility of Android makes it a convenient base operating system for smartphones; the only requirement is the availability of the software development kit to make any change to it.
Early Android phones came in two common forms: devices with a sliding physical keyboard, and fully touch-screen devices such as the HTC EVO, which the user operates entirely by physically contacting the touch screen. Android supports multitasking, and its interface is quite good and user friendly.


    Interface

    Android's default user interface is mainly based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, along with a virtual keyboard. Game controllers and full-size physical keyboards are supported via Bluetooth or USB. The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware, such as accelerometers, gyroscopes and proximity sensors are used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented, or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel.




The main hardware platform for Android is ARM (the ARMv7 and ARMv8-A architectures), with x86, MIPS and MIPS64, and x86-64 architectures also officially supported in later versions of Android. The unofficial Android-x86 project provided support for the x86 architectures ahead of the official support, and the MIPS architecture was likewise supported before Google added it officially. Since 2012, Android devices with Intel processors began to appear, including phones and tablets. While gaining support for 64-bit platforms, Android was first made to run on 64-bit x86 and then on ARM64. Since Android 5.0 "Lollipop", 64-bit variants of all platforms are supported in addition to the 32-bit variants.
    Requirements for the minimum amount of RAM for devices running Android 7.1 range from in practice 2 GB for best hardware, down to 1 GB for the most common screen, to absolute minimum 512 MB for lowest spec 32-bit smartphone. The recommendation for Android 4.4 is to have at least 512 MB of RAM, while for "low RAM" devices 340 MB is the required minimum amount that does not include memory dedicated to various hardware components such as the baseband processor. Android 4.4 requires a 32-bit ARMv7, MIPS or x86 architecture processor (latter two through unofficial ports), together with an OpenGL ES 2.0 compatible graphics processing unit (GPU). Android supports OpenGL ES 1.1, 2.0, 3.0, 3.1 and as of latest major version, 3.2 and Vulkan. Some applications may explicitly require a certain version of the OpenGL ES, and suitable GPU hardware is required to run such applications.
Android devices incorporate many optional hardware components, including still or video cameras, GPS, orientation sensors, dedicated gaming controls, accelerometers, gyroscopes, barometers, magnetometers, proximity sensors, pressure sensors, thermometers, and touchscreens. Some hardware components are not required, but became standard in certain classes of devices, such as smartphones, and additional requirements apply if they are present. Some other hardware was initially required, but those requirements have been relaxed or eliminated altogether. For example, as Android was developed initially as a phone OS, hardware such as microphones were required, while over time the phone function became optional. Android used to require an autofocus camera; this was relaxed to a fixed-focus camera, if a camera was present at all, and the camera was dropped as a requirement entirely when Android started to be used on set-top boxes.
    In addition to running on smartphones and tablets, several vendors run Android natively on regular PC hardware with a keyboard and mouse. In addition to their availability on commercially available hardware, similar PC hardware-friendly versions of Android are freely available from the Android-x86 project, including customized Android 4.4. Using the Android emulator that is part of the Android SDK, or third-party emulators, Android can also run non-natively on x86 architectures. Chinese companies are building a PC and mobile operating system, based on Android, to "compete directly with Microsoft Windows and Google Android". The Chinese Academy of Engineering noted that "more than a dozen" companies were customising Android following a Chinese ban on the use of Windows 8 on government PCs.

    Development

    Android is developed by Google until the latest changes and updates are ready to be released, at which point the source code is made available to the Android Open Source Project (AOSP), an open source initiative led by Google. The AOSP code can be found without modification on select devices, mainly the Nexus and Pixel series of devices. The source code is, in turn, customized and adapted by original equipment manufacturers (OEMs) to run on their hardware. Also, Android's source code does not contain the often proprietary device drivers that are needed for certain hardware components. As a result, most Android devices, including Google's own, ultimately ship with a combination of free and open source and proprietary software, with the software required for accessing Google services falling into the latter category.