Friday, 8 July 2011

Create chocolates in shapes of your choice: the 3D chocolate printer has arrived

If you're trying to woo that special someone, instead of just bringing them a box of ordinary chocolates, how about a box of chocolates that look like you? You're right, that would just be creepy, but chocolates formed into user-defined shapes are nonetheless now a possibility, thanks to a 3D chocolate printer developed at the University of Exeter.
Rather than a product that people would buy and keep in their homes, the developers see the printer being owned by candy-making companies. Customers would submit their designs via a web interface (currently in development), then the company would print the chocolates and deliver them. Less imaginative users could also browse existing designs and copy or modify them.
"What makes this technology special is that users will be able to design and make their own products" said lead researcher Dr. Liang Hao. "In the long term it could be developed to help consumers custom-design many products from different materials but we've started with chocolate as it is readily available, low cost and non-hazardous ... In future, this kind of technology will allow people to produce and design many other products such as jewelry or household goods. Eventually we may see many mass produced products replaced by unique designs created by the customer."
Like most 3D printers, the device works by depositing successive layers of the building material. Chocolate presented a challenge, however, as it requires precise heating and cooling cycles. The Exeter team therefore had to create new temperature and heating control systems, in order to keep the chocolate liquid enough to work with, yet cool enough that it would set upon deposition.
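At its heart, this is a thermostat problem. As a purely illustrative sketch (the set point, tolerance, and toy thermal model below are invented, not the Exeter team's actual control system), a simple bang-bang controller in Python that keeps the nozzle inside a narrow working band might look like this:

# Illustrative values only -- not the Exeter team's actual parameters.
TARGET_C = 32.0     # nozzle set point for melted chocolate
TOLERANCE_C = 0.5   # allowable drift before the controller reacts

temp_c = 25.0       # simulated nozzle temperature
heater_on = False

def step_simulation():
    """Toy thermal model: the nozzle warms while the heater is on
    and cools toward room temperature while it is off."""
    global temp_c
    temp_c += 0.3 if heater_on else -0.1

for tick in range(200):
    step_simulation()
    # Bang-bang control: heat when too cool, idle when too warm, so the
    # chocolate stays liquid in the nozzle yet sets once deposited.
    if temp_c < TARGET_C - TOLERANCE_C:
        heater_on = True
    elif temp_c > TARGET_C + TOLERANCE_C:
        heater_on = False
    if tick % 40 == 0:
        print(f"t={tick:3d}  temp={temp_c:5.2f} C  heater={'on' if heater_on else 'off'}")

A real printer would use something smoother, such as PID control, but the shape of the problem is the same.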
The University of Exeter is developing the 3D chocolate printer in collaboration with Brunel University and software developer Delcam. The project is funded by the Research Councils UK Cross-Research Council Programme - Digital Economy and is managed by the Engineering and Physical Sciences Research Council.

Thursday, 7 July 2011

Jacking into your brain: Brain-machine interfaces (BMI)

Of all the ways that we have been aided by technology, forging a direct link between our brains and computers is the most intimate yet. Brain-machine interfaces (BMIs) are poised to challenge our notions of identity, culpability and the acceptable limits of human enhancement.
BMIs work by eavesdropping on the electromagnetic signals generated by your brain. Invasive forms involve implanting electrodes into the grey matter or beneath the skull, and so far have been tested in a handful of paralysed people. Various groups are working on developing wheelchairs, robots and computers that can be controlled by brain signals alone. Krishna Shenoy of Stanford University is developing algorithms to improve the accuracy of implants for controlling a cursor on a screen. He believes BMIs will soon match or even surpass traditional ways to control computers.
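To give a flavour of what such decoding algorithms do (a toy sketch, not Shenoy's actual method; the electrode counts and synthetic data below are invented for illustration), a linear decoder maps each window of neural firing rates to a cursor velocity, with the weights fitted from training data:

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: firing rates from 16 hypothetical electrode channels,
# paired with the 2-D cursor velocities recorded during training.
n_samples, n_channels = 500, 16
true_weights = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocities = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit a linear decoder by least squares: velocity ~ firing rates.
weights, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

# At run time, each new window of firing rates maps to a cursor step.
new_rates = rng.poisson(lam=5.0, size=n_channels).astype(float)
vx, vy = new_rates @ weights
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")

Real BMI decoders are considerably more sophisticated (Kalman filters and beyond), but most share this basic rates-in, velocity-out structure.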

Friday, 1 July 2011

Cloud Computing & Risk

Kamal K. Pandey
Cloud computing refers to logical computational resources (data, software) accessed over a computer network (a WAN or the Internet, for example) rather than from a local computer. The online service can be offered by a cloud provider or run within a private organization. Some analysts regard these technologies as a technological evolution; others, such as Richard Stallman, see them as a marketing trap. Users or clients can perform a task, such as word processing or mailing, with a client such as a browser, while the service itself runs on cloud-based computational resources. Since the cloud is the underlying delivery mechanism, cloud-based remote applications and services can support any type of software application or service in use today.
In the past, tasks such as word processing were not possible without installing software on a local computer. With the development of local area networks (LANs) and wider bandwidth, multiple CPUs and storage devices could be used to host services like word processing in a remotely managed datacenter. Cloud computing takes the installation and upgrade hassles, and the need for greater computing power, away from users and gives service providers more control over administration of the services. Consumers now routinely use data-intensive applications driven by cloud technology that were previously unavailable because of cost and deployment complexity. In many companies, employees and departments are bringing a flood of consumer technology into the workplace, which raises legal, compliance, and security concerns for the corporation.
The term "software as a service" is sometimes used to describe programs offered through "The Cloud". A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is "The Cloud".
A useful analogy for cloud computing is public utilities such as electricity, gas, and water. Centralized and standardized utilities freed individuals from the difficulties of generating electricity or pumping water; all of the development and maintenance tasks involved were taken on by the provider. With cloud computing, this translates into reduced software-distribution costs for providers still shipping physical media such as DVDs. For consumers, software no longer has to be installed and is updated automatically, though whether it actually saves them money remains to be seen.
The principle behind the cloud is that any computer connected to the Internet is connected to the same pool of computing power, applications, and files. Users can store and access personal files such as music, pictures, videos, and bookmarks, play games, or do word processing on a remote server rather than physically carrying around a storage medium such as a DVD or thumb drive. Even those using web-based email such as Gmail, Hotmail, Yahoo!, a company-owned email system, or an e-mail client program such as Outlook, Evolution, Mozilla Thunderbird, or Entourage are making use of cloud email servers. Hence, desktop applications that connect to Internet-hosted email services can also be considered cloud applications.
Cloud computing utilizes the network as a means to connect the user to resources that are based in the cloud, as opposed to actually possessing them. The cloud may be accessed via the Internet or a company network, or both. Cloud services may be designed to work equally well with Linux, Mac, and Windows platforms. With smartphones and tablets on the rise, cloud services have evolved to allow access from any device connected to the Internet, giving mobile workers access on the go, as in telecommuting, and extending the reach of outsourced business services.
The service provider may pool the processing power of multiple remote computers in "the cloud" to accomplish a task such as backing up large amounts of data, word processing, or computationally intensive work. These tasks would normally be difficult, time-consuming, or expensive for an individual user or a small company to accomplish, especially with limited computing resources and funds. With cloud computing, clients require only a simple computer, such as a netbook (a class of machine created with cloud computing in mind) or even a smartphone, with a connection to the Internet or a company network, in order to make requests to and receive data from the cloud; hence the term "software as a service" (SaaS). Computation and storage are divided among the remote computers so that large volumes of both can be handled, and the client need not purchase expensive hardware or software for the task. The outcome of the processing task is returned to the client over the network, at a speed that depends on the Internet connection.
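To make the SaaS pattern concrete, here is a small sketch of a client handing work to a cloud service; the endpoint and JSON fields are hypothetical, invented purely for illustration and not any real provider's API:

import json
import urllib.request

# Hypothetical cloud word-count service -- the endpoint and fields are
# invented for illustration; no real public API is implied.
ENDPOINT = "https://api.example-cloud.com/v1/wordcount"

document = {"text": "Cloud computing moves the heavy lifting off the desktop."}
request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(document).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The client only ships data and receives results; the computation
# happens on the provider's pooled servers, not locally.
with urllib.request.urlopen(request) as response:
    result = json.load(response)
print(result)

All the client contributes is a request and a network connection; the computation and storage live on the provider's pooled servers.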
Risk: 
Cloud computing's users are exposed to risks mainly associated with:
1) Information security and users' privacy
  1. Using a cloud computing service to store data may expose the user to potential violations of privacy. The user's personal information is entrusted to a provider that may reside in a country other than the user's, and a malicious provider could access that data to perform market research and user profiling.
  2. In the case of wireless cloud computing, the security risk increases further because of the reduced security offered by wireless networks. In the presence of illegal acts such as misappropriation of personal data, the damage to the user can be very serious, and legal remedies and/or refunds are difficult to obtain if the provider resides in a state other than the user's country.
  3. In the case of industries or corporations, all data stored externally is seriously exposed to possible international or industrial espionage.
2) International, political and economic problems
  1. These may arise when the cloud's archives are located in a country other than that of the cloud's users. Crucial intellectual productions and large amounts of personal information are increasingly stored as digital data in private, centralized and only partially accessible archives, and users are given no guarantee of free future access.
  2. Further issues stem from the concentration of the cloud's archives in a few rich countries. If not governed by specific international rules:
    • It could increase the digital divide between rich and poor nations (if access to the stored knowledge is not freely ensured to all).
    • Since intangible property is considered a strategic factor for modern knowledge-based economies, it could favour big corporations with "polycentric bodies" and "monocentric minds" located only in the "cloud's countries".
3) Continuity of service
Users who delegate their data management and processing to an external service are severely limited whenever that service is not operating. A malfunction also affects a large number of users at once, because these services are often shared across a large network. And since the service depends on a high-speed Internet connection (both download and upload), even an interruption of the line by the user's Internet Service Provider (ISP) brings work to a complete standstill.

4) Data migration problems when changing the cloud provider
Another issue concerns data migration, or porting, when a user wants to change cloud provider. There is no defined standard between operators, so such a change is extremely complex; and the bankruptcy of a cloud provider's company could be extremely dangerous for its users.

Network Security Requirements and Attacks

Kamal K. Pandey
Network security is a complicated subject, historically only tackled by well-trained and experienced experts. However, as more and more people become “wired”, an increasing number of people need to understand the basics of security in a networked world. With the introduction of the computer, the need for automated tools for protecting files and other information stored on the computer became evident; this is especially the case for a shared system, such as a time-sharing system, and the need is even more acute for systems that can be accessed over a public telephone or data network. The generic name for the collection of tools designed to protect data and to thwart hackers is computer security. Although this is an important topic, it is beyond the scope of this article and will be dealt with only briefly.

Network security is becoming more and more important as people spend more and more time connected. Compromising network security is often much easier than compromising physical or local security, and is much more common. Computer networks and other data systems are built from several different components, each of which has its own special security characteristics. Security must therefore be considered across every part of a network, because the security chain as a whole is only as strong as its weakest link.
Security Requirements
Because so many threats to network security exist, we need a clear definition of the security requirements. Computer and network security address three requirements:
1. Secrecy. Requires that the information in a computer system be accessible for reading only by authorized parties. This type of access includes printing, displaying, and other forms of disclosure, including simply revealing the existence of an object.
2. Integrity. Requires that computer system assets can be modified only by authorized parties. Modification includes writing, changing, changing status, deleting, and creating.
3. Availability. Requires that computer system assets are available to authorized parties.
The types of attack on the security of a computer system or network are best characterized by viewing the function of the computer system as providing information. In general, there is a flow of information from a source, such as a file or a region of main memory, to a destination, such as another file or a user. Attacks on this flow fall into four general categories:
1. Interruption. An asset of the system is destroyed or becomes unavailable or unusable. This is an attack on availability. Examples include destruction of a piece of hardware, such as a hard disk, the cutting of a communication line, or the disabling of the file management system.
2. Interception. An unauthorized party gains access to an asset. This is an attack on confidentiality. The unauthorized party could be a person, a program, or a computer. Examples include wiretapping to capture data in a network, and the illicit copying of files or programs.
3. Modification. An unauthorized party not only gains access to but tampers with an asset. This is an attack on integrity. Examples include changing values in a data file, altering a program so that it performs differently, and modifying the content of messages being transmitted in a network.
4. Fabrication. An unauthorized party inserts counterfeit objects into the system. This is an attack on authenticity. Examples include the insertion of spurious messages in a network or the addition of records to a file.
In practice, a number of real attacks fall under these general categories. Some of them are described below:
Land Attacks
A LAND attack is an attack against a server or computer connected to a network that aims to stop the services the machine provides, disrupting the service or the network. Attacks of this kind are called denial-of-service (DoS) attacks. A LAND attack is categorized as a SYN attack because it abuses the SYN (synchronization) packet used in the three-way handshake that establishes a TCP/IP connection. In a three-way handshake between a client and a server, the following occurs:
• First, the client sends a SYN packet to the server/host to initiate a TCP/IP connection between client and host.
• Second, the host replies by sending a SYN/ACK (synchronization/acknowledgement) packet back to the client.
• Finally, the client replies by sending an ACK (acknowledgement) packet back to the host. The TCP/IP connection between client and host is then established and data transfer can begin.
In a LAND attack, the attacking computer, acting as the client, sends a spoofed SYN packet to the target server. The spoofed packet contains a source address and source port number exactly identical to the destination address and destination port number. Thus, when the host sends a SYN/ACK back to the "client", an infinite loop results, because the host is actually sending the SYN/ACK to itself. An unprotected host or server will usually crash or hang under such a LAND attack. Today, however, LAND attacks are no longer effective, because almost all systems are protected from this type of attack through packet filtering or firewalls.
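The packet-filtering defence mentioned above is easy to express: drop any SYN packet whose source equals its destination. An illustrative sketch (packets are modelled as plain dictionaries, not a real firewall API):

def is_land_packet(pkt: dict) -> bool:
    """Flag TCP SYN packets whose source equals their destination --
    the signature of a LAND attack."""
    return (
        pkt["flags"] == "SYN"
        and pkt["src_ip"] == pkt["dst_ip"]
        and pkt["src_port"] == pkt["dst_port"]
    )

# A spoofed packet claiming to come from the victim itself:
spoofed = {"flags": "SYN", "src_ip": "10.0.0.5", "src_port": 80,
           "dst_ip": "10.0.0.5", "dst_port": 80}
normal = {"flags": "SYN", "src_ip": "192.168.1.9", "src_port": 51234,
          "dst_ip": "10.0.0.5", "dst_port": 80}

for pkt in (spoofed, normal):
    action = "drop" if is_land_packet(pkt) else "pass"
    print(f"{pkt['src_ip']}:{pkt['src_port']} -> "
          f"{pkt['dst_ip']}:{pkt['dst_port']}  {action}")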
Ping of Death
The Ping of Death is a denial-of-service (DoS) attack on a server or computer connected to a network. It exploits the packet fragmentation feature of TCP/IP together with the fact that the IP protocol limits packet size to 65,535 bytes (64 kilobytes). An attacker sends fragmented ICMP packets (the kind used by ping) such that, when the fragments are reassembled, the total packet size exceeds that limit. A simple example is the following MS-DOS command: C:\windows> ping -l 65540 <target IP>
This command sends a ping (ICMP) packet of 65,540 bytes to a host or server. When an unprotected server receives a packet exceeding the size limit specified by the IP protocol, it usually crashes, hangs, or reboots, so that its services are disrupted (denial of service). In addition, Ping of Death packets can easily be spoofed, so that their true origin cannot be determined, and the attacker needs to know only the IP address of the computer to be attacked. Today, however, Ping of Death attacks are no longer effective, because operating systems have been patched against this type of attack; in addition, firewalls can block all ICMP packets from outside, so the attack can no longer be carried out.
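The corresponding defence is a bounds check at reassembly time: reject any datagram whose fragments would add up to more than the IP maximum. A small sketch, modelling fragments as (offset, length) pairs purely for illustration:

MAX_IP_PACKET = 65_535  # maximum legal IP packet size in bytes

def reassembled_size(fragments):
    """Given (offset, length) pairs in bytes, return the total size the
    reassembled packet would reach."""
    return max(offset + length for offset, length in fragments)

def safe_to_reassemble(fragments) -> bool:
    # A Ping of Death hides behind fragmentation: each piece is legal,
    # but the total overflows the 65,535-byte limit on reassembly.
    return reassembled_size(fragments) <= MAX_IP_PACKET

# The last fragment pushes the packet to 65,540 bytes -- over the limit.
ping_of_death = [(0, 32_000), (32_000, 32_000), (64_000, 1_540)]
print(safe_to_reassemble(ping_of_death))   # False: drop it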
Teardrop
The Teardrop attack is another denial-of-service (DoS) attack against a server or computer connected to a network. It exploits the packet fragmentation feature of TCP/IP and a weakness in the way TCP/IP stacks reassemble fragmented packets. In a data transmission from one computer to another over a TCP/IP network, the data is broken into several smaller packets on the originating computer, and the packets are sent and then reassembled on the destination computer. For example, if 4,000 bytes of data are to be sent from computer A to computer B, the data might be broken into three packets.
On computer B, the three packets are sorted and merged according to the offset field in the IP header of each fragment. Normally the three packets can be sorted and reassembled into the 4,000 bytes of data without problems. In a Teardrop attack, however, the attacker sends fragments whose offset values have been manipulated so that they overlap, and a vulnerable system that does not check for this can crash or hang while trying to put the pieces back together.
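A reassembly routine can protect itself with exactly this check: verify that sorted fragments never overlap before merging them. A sketch for illustration, again modelling fragments as (offset, length) pairs:

def fragments_overlap(fragments) -> bool:
    """Return True if any fragment starts before the previous one ends --
    the malformed layout a Teardrop attack relies on."""
    ordered = sorted(fragments)
    for (off_a, len_a), (off_b, _) in zip(ordered, ordered[1:]):
        if off_b < off_a + len_a:
            return True
    return False

# Normal transmission: 4,000 bytes split cleanly into three fragments.
normal = [(0, 1500), (1500, 1500), (3000, 1000)]
# Teardrop-style fragments: the second one overlaps the first.
teardrop = [(0, 1500), (1000, 1500), (3000, 1000)]

print(fragments_overlap(normal))    # False: safe to reassemble
print(fragments_overlap(teardrop))  # True: drop the datagram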

        Security & Cryptography

        
        Kamal K. Pandey
        Securing the Internet presents great challenges and research opportunities. Potential applications such as Internet voting, universally available medical records, and ubiquitous e-commerce are all being hindered because of serious security and privacy concerns. The epidemic of hacker attacks on personal computers and web sites only highlights the inherent vulnerability of the current computer and network infrastructure.

Adequately addressing security and privacy concerns requires a combination of technical, social, and legal approaches. Topics currently under active investigation in the department include mathematical modeling of security properties, implementation and application of cryptographic protocols, secure and privacy-preserving distributed algorithms, trust management, verification of security properties, and proof-carrying code. There is also interest in the legal aspects of security, privacy, and intellectual property, both within the department and in the world-famous Yale Law School, with which we cooperate. Some of these topics are described in greater detail below.

James Aspnes is interested in problems involved with securing large distributed algorithms against disruption by untrustworthy participants. Using cryptographic techniques, it may be possible to allow intermediate results in a distributed algorithm to be certified independently of who provides them, reducing the problem of choosing which machines to trust. These issues become especially important in systems, such as peer-to-peer networks, where association with the system is voluntary and cannot be limited to machines under the control of the algorithm designer.

Joan Feigenbaum is interested in the foundations of electronic commerce and in fundamental problems in complexity theory that are motivated by cryptology. One such problem is the power of “instance-hiding” computations: can the owner of a private database use the superior processing power of one or more other machines (perhaps for a fee) without having to reveal the database to those machines? In a set of influential papers with Martín Abadi, Don Beaver, Lance Fortnow, and Joe Kilian, Professor Feigenbaum showed:
1) that instance-hiding computations are limited in power if the private-database owner can only consult a single other machine,
2) that they are extremely powerful if the owner can consult multiple other machines, and
3) that instance hiding is closely related to some of the central themes of complexity theory, e.g., interactive provability, average vs. worst-case complexity, and the inherent communication costs of multiparty protocols.

In another direction, Professor Feigenbaum founded the research area of “trust management” in collaboration with Matt Blaze and Jack Lacy. Emerging Internet services that use encryption on a mass-market scale require sophisticated mechanisms for managing trust. E-businesses will receive cryptographically signed requests for action and will have to decide whether or not to grant these requests. In centralized (and small-scale distributed) computing communities, an authorizer can make such a decision based on the identity of the person who signed the request. Global, internet-scale e-businesses, however, cannot rely on identities. Most merchants will have had no contact with a typical prospective customer prior to the first time they receive a request from him. Making authorization decisions in this type of environment requires formal techniques for specifying security policies and security credentials, rigorously determining whether a particular set of credentials proves that a request complies with a policy, and deferring trust to third-party credential issuers. The “PolicyMaker” and “KeyNote” trust-management systems, which she co-invented with Blaze, Lacy, John Ioannidis, and Angelos Keromytis, have had wide-ranging impact on large-scale distributed-authorization mechanisms.
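To make the idea concrete, here is a toy compliance check; it is in no way PolicyMaker's or KeyNote's actual language or semantics, just an invented illustration of how a delegation chain from a policy to a requesting key can be verified mechanically:

# Toy trust-management check: keys, credentials, and the policy root are
# plain strings/tuples invented for illustration.
POLICY_ROOT = "merchant-policy"

# Each credential: (issuer, subject, action) -- the issuer authorizes the
# subject to perform the action, possibly delegating further.
credentials = [
    ("merchant-policy", "bank-key", "purchase"),   # policy trusts the bank
    ("bank-key", "customer-key-42", "purchase"),   # bank vouches for customer
]

def complies(request_key: str, action: str, creds) -> bool:
    """Breadth-first search for a delegation chain from the policy root
    to the requesting key, restricted to the requested action."""
    frontier, seen = {POLICY_ROOT}, set()
    while frontier:
        if request_key in frontier:
            return True
        seen |= frontier
        frontier = {subj for issuer, subj, act in creds
                    if issuer in frontier and act == action} - seen
    return False

print(complies("customer-key-42", "purchase", credentials))  # True
print(complies("customer-key-42", "refund", credentials))    # False

The point of real trust-management systems is that such proofs of compliance are computed rigorously from formally specified policies and credentials, rather than from the identity of the requester.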

Michael Fischer is interested in security problems connected with Internet voting, and more generally in trust and security in multiparty computations. He has been developing an artificial society in which trust has a precise algorithmic meaning. In this setting, trust can be learned and used for decision making. Better decisions lead to greater social success. This framework allows for the development and analysis of some very simple algorithms for learning and utilizing trust that are easily implementable in a variety of settings and are arguably similar to what people commonly use in everyday life.

Zhong Shao leads the FLINT group at Yale, which is developing a system for secure mobile code based on authentication logics, proof-carrying code, and type-based certifying compilers. Authentication logics are formal logics that allow one to reason about the properties of systems and protocols that verify the identity of users and decide whether or not to permit various operations. Modeling such systems provides the usual benefits of formal analysis: hidden assumptions are made explicit, redundant features are exposed, and flaws in the system may be found. Proof-carrying code (PCC) allows a code producer to provide a (compiled) program to a host, along with a formal proof of safety. The host can specify a safety policy and a set of axioms for reasoning about safety; the producer’s proof must be in terms of those axioms. Type-based certifying compilers are compilers that use static type information to help generate provably safe target code. These technologies fit together naturally and form the foundation for modern secure mobile-code systems.
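The following toy sketch conveys the shape of the mobile-code idea, though it is simple policy checking rather than true proof-carrying code: the host states a safety policy and mechanically verifies untrusted code against it before running anything. Everything here is invented for illustration.

import ast

# Host-side safety policy: allow arithmetic and variable reads, ban calls,
# imports, and attribute access. (A crude stand-in for a real safety proof.)
ALLOWED = (ast.Module, ast.Expr, ast.BinOp, ast.Add, ast.Sub, ast.Mult,
           ast.Constant, ast.Name, ast.Load)

def check(code: str) -> bool:
    """Reject mobile code containing any construct outside the policy."""
    return all(isinstance(node, ALLOWED) for node in ast.walk(ast.parse(code)))

mobile_code = "x * 2 + 1"          # producer's program
hostile_code = "__import__('os')"  # tries to escape the sandbox

for src in (mobile_code, hostile_code):
    if check(src):
        print(src, "=>", eval(src, {"__builtins__": {}}, {"x": 20}))
    else:
        print(src, "=> rejected by safety policy")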

Wednesday, 29 June 2011

Making Websites Accessible and Secure

Kamal K. Pandey
Website CAPTCHA technology used to protect sites from hackers, bots and spammers is making those same sites inaccessible to many potential users, according to a survey of 150 typical online forums and other sites. CAPTCHA stands for "completely automated public Turing test to tell computers and humans apart." These are computer-generated checks that attempt to determine whether a visitor is a legitimate user or a potentially malicious computer script favoured by hackers and spammers. They commonly take a question and answer form or ask users to enter characters in an obfuscated image of text.
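For a sense of how the obfuscated-text variety is generated, here is a small sketch using the third-party Pillow imaging library; real CAPTCHA systems apply far heavier distortion, and the sizes and noise levels below are arbitrary illustrative choices:

import random
import string

from PIL import Image, ImageDraw, ImageFont

def make_captcha(length=5):
    """Render a random challenge string over a noisy background."""
    answer = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", (160, 60), "white")
    draw = ImageDraw.Draw(img)

    # Scatter noise pixels so naive OCR scripts struggle.
    for _ in range(600):
        xy = (random.randrange(160), random.randrange(60))
        draw.point(xy, fill=(random.randrange(256),) * 3)

    # Draw each character at a slightly jittered position.
    font = ImageFont.load_default()
    for i, ch in enumerate(answer):
        draw.text((15 + i * 26, 20 + random.randint(-8, 8)), ch, fill="black", font=font)

    return img, answer

img, answer = make_captcha()
img.save("captcha.png")
print("expected answer:", answer)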

CAPTCHAs have even become useful to the wider community, allowing corrections to be made to scanned public documents, such as out-of-print books, by crowd-sourcing the entries users type. There are also audio CAPTCHAs on many sites.

However, although they can help site owners block spam and malicious attacks, CAPTCHAs pose serious problems for the visually impaired and deaf web communities, say Joanne Kuzma and colleagues at the University of Worcester, England. The rise of online forums has benefited disabled users, who take advantage of better communications and more inclusion into society, the team asserts. But the advent of CAPTCHAs has represented, on many occasions, an insurmountable technical barrier to many of those potential users.

There have been several legal cases in which members of a particular community have taken website owners to court over such obstacles, citing equal opportunities law. Such cases have ensured that those and other companies begin to recognise and address the problems around accessibility. Indeed, many companies in the web 2.0 era have pre-empted the issues that might arise and ensured that their sites are accessible and usable by everyone.

        "Firms need to realise that it is legally and ethically important to provide fully accessibility to their systems," the researchers say. "With the increasing number of disabled people using these sites, firms can benefit economically by catering to their disabled constituents."

In surveying 150 online forums, the team identified many that exclude potential users through inappropriate CAPTCHA implementation. They suggest that site owners should determine whether the security gained by implementing a CAPTCHA is sufficient to justify its use at the cost of possibly excluding some users. There are many ways to block spammers and tighten security that can function perfectly well behind the scenes rather than overtly at the front end of a site. If a site deems it essential to use a CAPTCHA, it must ensure that several types are in place, so that users of any ability have a choice, including character-based, audio, image-recognition and logic-based tests, and no one is excluded except the spammers and hackers.
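An accessible alternative of the kind suggested can be as simple as a plain-text logic question, which works with screen readers and requires no image at all. A brief sketch (the questions are invented examples):

import random

# Plain-text logic challenges: screen-reader friendly, no images required.
QUESTIONS = [
    ("What is three plus four?", "7"),
    ("Which is larger, 9 or 2?", "9"),
    ("Type the last word of: the cat sat on the mat", "mat"),
]

def ask_logic_captcha() -> bool:
    question, answer = random.choice(QUESTIONS)
    reply = input(question + " ").strip().lower()
    return reply == answer

if ask_logic_captcha():
    print("Welcome, human.")
else:
    print("Sorry, try another challenge type (audio, image, or logic).")

Offering several such challenge types side by side is precisely the choice the researchers recommend.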