Tag: network

Nokia does better than expected

Former rubber boot maker Nokia has reported a better-than-expected quarterly profit, thanks mostly to the fact that it bought Alcatel-Lucent and slashed its costs.

Nokia and its rivals, Ericsson and Huawei, are not having a good time as telecom operators’ demand for faster 4G mobile broadband equipment has peaked, and upgrades to next-generation 5G equipment are still years away.

Fourth-quarter group earnings before interest and taxes fell 27 percent from a year ago to $1.01 billion, but came in about 25 percent better than the cocaine nose jobs of Wall Street had predicted.

The networks unit’s sales in the quarter fell 14 percent, more than expected, but its operating margin came in at 14.1 percent, ahead of a market forecast of 11.7 percent.

Nokia said that while networks sales were set to decline further this year, profitability could improve from a 2016 margin of 8.9 percent.

Chief Executive Rajeev Suri said in a statement that while he was disappointed with Nokia’s topline development in 2016, he expected its performance to improve in 2017. He saw potential for margin expansion in 2017 and beyond as market conditions improve and sales transformation programmes gain traction.

Still, in the current market, Nokia’s results are strong. Nokia bought Alcatel-Lucent last year in response to industry changes and is currently axing thousands of jobs as it seeks to cut 1.2 billion euros of annual costs by 2018.

Nokia was caught out by the rise of smartphones and ended up selling its handset business to Microsoft in 2014, leaving it with the networks business and a portfolio of technology patents.

NHS email system borked by one idiot and 120 pedants

The NHS’s email system is under pressure after one idiot decided to send an email to everyone.

More than 1.2 million employees are currently trapped in a “reply-all” email hell.

To make matters worse, the email was just a test, but it prompted a series of reply-all responses from annoyed recipients going out to all of the organisation’s million-plus employees.

The difficulty is that people cannot resist replying to the thing to tell everyone to stop emailing, to ask what is going on, or to ask to be removed from the mailing list.
So far there have been at least 120 replies, meaning that more than 140 million needless emails have been sent across the NHS’s network by pedants who think they are doing the right thing.
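The arithmetic behind that figure is brutal. A minimal back-of-envelope sketch (assuming, as the article implies, that each reply-all went to the full 1.2 million-strong list):

```python
# Back-of-envelope tally of the NHS reply-all storm, using the
# article's figures. Assumes every reply hit the full list.
recipients = 1_200_000   # staff on the distribution list
replies = 120            # reply-all messages so far

total = recipients + replies * recipients  # original test + the storm
print(f"Needless emails so far: {total:,}")  # 145,200,000
```

One more “helpful” reply adds another 1.2 million messages to the pile.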

Apparently, the network is running like an asthmatic ant with a heavy load of shopping.

The NHS Pensions department has resorted to Twitter to warn that anyone needing to contact it by email should expect delays in responding, due to an issue currently affecting all NHS mail.

Smartphone data subscriptions to double

Ericsson’s crack team of Tarot card readers has predicted that global subscriptions for smartphones will more than double by 2020.

It thinks that mobile data traffic will grow ninefold and that there will be 6.1 billion smartphone subscriptions globally by the end of 2020, up from 2.6 billion in 2014.

The son of Eric said in his Mobility Report: “Advanced mobile technology will be globally ubiquitous by 2020, with 70 percent of people using smartphones and 90 percent covered by mobile broadband networks.”

Ericsson said video is expected to increase its share of total mobile traffic to 60 percent in 2020, up from an earlier projection of 55 percent and compared with around 45 percent in 2014.

Of course there is no indication that the world’s networks will stand up to that sort of hammering.

Boffins speed up Wi-Fi by 10 times

Researchers at Oregon State University have emerged from their smoke-filled labs with a technology that can increase the bandwidth of Wi-Fi systems by 10 times.

The technology, which uses LED lights, can be integrated with existing Wi-Fi systems to reduce bandwidth problems in crowded locations, such as airport terminals or coffee shops.

LED technology developments have made it possible to modulate the LED light rapidly, meaning that a “free space” optical communication system is possible.

The system uses inexpensive components.

The prototype, called Wi-FO, uses LEDs that operate beyond the visible spectrum and creates an invisible cone of light about one metre square in which data can be received. To address the problem of that small usable area, the researchers created a hybrid system that can switch between several LED transmitters installed on a ceiling and the existing Wi-Fi system.

Thinh Nguyen, an OSU associate professor of electrical and computer engineering, said the Wi-FO system could easily be transformed into a marketable product, and that he was looking for a company interested in further developing and licensing the technology.

The system can potentially send data at up to 100 megabits per second. Although some current Wi-Fi systems have similar bandwidth, that capacity has to be divided among all connected devices, so each user might receive just five to 10 megabits per second, whereas the hybrid system could deliver 50-100 megabits to each user.
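To see why that matters, here is a crude sketch of the shared-channel arithmetic (the 15-user coffee shop is a made-up example, and a real Wi-Fi channel degrades worse than an even split):

```python
# Crude per-user throughput comparison using the article's figures.
# Shared Wi-Fi splits one channel among everyone; each Wi-FO LED
# cone covers roughly one user, so its bandwidth is not shared.
def per_user_mbps(channel_mbps: float, users: int) -> float:
    """Naive even split of a shared channel among active users."""
    return channel_mbps / users

users = 15  # hypothetical crowded coffee shop
print(f"Shared Wi-Fi: {per_user_mbps(100, users):.1f} Mbps each")  # 6.7
print(f"Wi-FO cone:   {per_user_mbps(100, 1):.1f} Mbps each")      # 100.0
```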

In a home where telephones, tablets, computers, gaming systems, and televisions may all be connected to the internet, increased bandwidth would eliminate problems like video streaming that stalls and buffers.

The receivers are small photodiodes that cost less than a dollar each and could be connected through a USB port for current systems, or incorporated into the next generation of laptops, tablets, and smartphones.

A patent has been secured on the technology, and a paper was presented at the 17th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems.

Researchers improve wi-fi

Researchers at the swanky US university of Stanford have worked out a way of improving wi-fi reception.

One of the problems of wi-fi is that it can be buggered up when you have many people packed into a flat or office building.

However, researchers at Stanford claim to have found a way to turn crowding into an advantage.

Using a dorm on the Stanford campus, they built a single, dense wi-fi infrastructure that each resident can use and manage like their own private network.

Dubbed BeHop, the system can be centrally managed for maximum performance and efficiency while users still assign their own SSIDs (service set identifiers), passwords and other settings.

Yiannis Yiakoumis, a Stanford doctoral student who presented a paper at the Open Networking Summit this week, said that the whole thing can be managed with cheap consumer-grade access points and software-defined networking.

Each household installs its own wi-fi network with a wired broadband link out to the Internet. Each of those networks may be powerful enough to give good performance under optimal circumstances within the owner’s unit, but it may suffer from interference with all the other privately run networks next door.

Yiakoumis and his mates built a shared network of access points using home units provided by NetGear. They modified the firmware of those APs and, using SDN, virtualised the private aspects of the network.

Residents named and secured their own virtual networks as if they had bought and plugged in a router in their own rooms. 
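The paper’s plumbing is more involved, but the core idea can be sketched in a few lines (a toy model, not the actual BeHop code; the names and AP-selection logic here are invented for illustration):

```python
# Toy model of BeHop's idea: a central controller owns the physical
# APs and maps each resident's virtual network onto a nearby AP.
from dataclasses import dataclass, field

@dataclass
class VirtualNetwork:
    ssid: str       # resident-chosen network name
    password: str   # resident-chosen secret

@dataclass
class Controller:
    aps: dict[str, list[VirtualNetwork]] = field(default_factory=dict)

    def register(self, vnet: VirtualNetwork, best_ap: str) -> None:
        """Broadcast the resident's SSID from the AP nearest to them.
        The real system would also install SDN rules to steer traffic
        and could migrate virtual networks between APs for balance."""
        self.aps.setdefault(best_ap, []).append(vnet)

ctrl = Controller()
ctrl.register(VirtualNetwork("bobs-net", "hunter2"), "ap-hall-1")
ctrl.register(VirtualNetwork("dorm-gamer", "s3cret"), "ap-hall-1")
print({ap: [v.ssid for v in vs] for ap, vs in ctrl.aps.items()})
```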

Wi-fi networks can get sick

Researchers at the University of Liverpool have created a wi-fi virus which spreads through populated areas as efficiently as the common cold spreads between humans.

The team designed and simulated an attack by a virus it dubbed “Chameleon”. It found that the virus spread quickly between homes and businesses, avoided detection, and identified the points at which wi-fi access is least protected by encryption and passwords.

Fortunately the wi-fi attack was just a computer simulation, but researchers from the University’s School of Computer Science and Electrical Engineering and Electronics found that “Chameleon” behaved like an airborne virus.

This is partly because densely populated areas have more APs in closer proximity to each other, which means the virus propagates more quickly, particularly across networks connectable within a 10-50 metre radius.

Alan Marshall, Professor of Network Security at the University, said that when “Chameleon” attacked an AP it did not affect how it worked, but was able to collect and report the credentials of all other wi-fi users who connected to it. The virus then used this data to connect to and infect other users.

When an AP was encrypted and password protected, the virus simply moved on to find those which weren’t strongly protected. Coffee shops and airports became hotbeds of infection.
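That behaviour is simple enough to mimic in a toy model (a sketch in the spirit of the paper, not the Liverpool team’s simulator; the AP count, protection rate and geometry are invented):

```python
# Toy Chameleon-style propagation: infection hops between APs in
# radio range, skipping strongly protected ones.
import math, random

random.seed(1)
# Each AP: (x, y) position in metres, and whether it is well protected
aps = [(random.uniform(0, 200), random.uniform(0, 200),
        random.random() < 0.4) for _ in range(60)]
RADIUS = 50  # article: spreads within a 10-50 metre radius

infected = {0}  # start from one poorly secured AP
spreading = True
while spreading:
    spreading = False
    for i, (x, y, protected) in enumerate(aps):
        if i in infected or protected:
            continue  # the virus "simply moved on" from hardened APs
        if any(math.hypot(x - aps[j][0], y - aps[j][1]) <= RADIUS
               for j in infected):
            infected.add(i)
            spreading = True

print(f"{len(infected)} of {len(aps)} APs infected")
```

Pack the same APs into a smaller area and the infected count climbs, which is the paper’s point about dense urban wi-fi.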

Professor Marshall said that it was assumed that it was not possible to develop a virus that could attack wi-fi networks but the research demonstrated that this is possible and that it can spread quickly. 

Intel coming in the air tonight

Intel has come up with a new form of ultra-high-speed wireless tech which lets small base stations handle shedloads of data.

The technology is based around Chipzilla’s modular antenna arrays.

Intel has prototyped a chip-based antenna array that can sit in a milk-carton-sized cellular base station. If it works, and Intel claims that it does, the technology could turbocharge future wireless networks by using ultrahigh frequencies.

The tech is a millimeter wave modular antenna array, and will be shown off today at the Mobile World Congress conference in Barcelona, Spain.

It takes the ultrafast capabilities that Samsung and researchers at New York University demonstrated last year using benchtop-scale equipment and packs them into a box-sized gadget. Cities would be carpeted with such small stations, one every block or two, each capable of handling huge amounts of data at short ranges.

One cell could send and receive data at speeds of more than a gigabit per second at ranges of up to a few hundred metres, and far more at shorter distances. That knocks the socks off 4G LTE, which can only manage 75 megabits per second.

Both the Intel and Samsung technologies could eventually use frequencies of 28 or 39 gigahertz or higher. These frequencies are known as millimeter wave and carry far more data than those used in mobile networks. The downside is that they are easily blocked by objects in the environment.  Even rain can stuff them up.

To get around the blockage problem, processors dynamically shape how a signal is combined among 64, 128, or even more antenna elements, controlling the direction in which a beam is sent from each antenna array and adjusting in response to changing conditions.
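The beam-steering trick itself is standard phased-array maths: apply a progressive phase shift across the elements and the combined wavefront points where you want it. A minimal sketch (a generic uniform linear array, not Intel’s design; the element count and spacing are illustrative):

```python
# Minimal phased-array beam steering: per-element phase shifts
# that point the main beam at a chosen angle.
import numpy as np

c = 3e8                  # speed of light, m/s
freq = 28e9              # 28 GHz, one of the bands mentioned
lam = c / freq           # wavelength, ~10.7 mm
d = lam / 2              # half-wavelength element spacing
n_elements = 64

def steering_weights(angle_deg: float) -> np.ndarray:
    """Complex weight per element steering the beam to angle_deg."""
    n = np.arange(n_elements)
    phase = 2 * np.pi * d * n * np.sin(np.radians(angle_deg)) / lam
    return np.exp(-1j * phase)

# When rain or a bus blocks the path, recompute the weights to
# steer the beam along a different route.
weights = steering_weights(30.0)
print(weights[:3])
```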

Intel says its version is more efficient than what has been seen so far.

It can scale up the number of modular arrays as high as practical to increase transmission and reception sensitivity.

If Chipzilla is right, the only barriers to the technology are regulatory not technological.

GPUs could find network use

A researcher working for the US Department of Energy’s Fermi National Accelerator Laboratory has found that GPUs make ideal tools for capturing details about network data.

Fermilab’s Wenji Wu told CIO that GPU-based network monitors keep pace with all the traffic flowing through networks running at more than 10Gbps.

As bandwidth has skyrocketed, network analysis tools have found it hard to keep up. To make matters worse, network admins want to inspect operational data in real-time.

All this is done with standard x86 processors or custom ASICs, which are limited in what they can do. CPUs have the memory of a goldfish and tend to drop packets. ASICs have the memory bandwidth but are an arse to programme. They also can’t split processing duties into parallel tasks, which is very important these days.

In a paper, Wenji wrote that GPUs have “a great parallel execution model”. They offer high memory bandwidth, easy programmability, and can split the packet capturing process across multiple cores.

Monitoring networks requires reading all the data packets as they cross the network, which requires more parallelism than you can poke a stick at.
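The shape of that parallelism is easy to show. Here is a toy, vectorised stand-in (NumPy on a CPU rather than the CUDA kernels of the actual prototype; the header fields and filter are invented) that inspects a million packet headers in one pass instead of looping over them one at a time:

```python
# Data-parallel packet inspection: examine every captured header
# at once, the access pattern a GPU kernel would run per-thread.
import numpy as np

rng = np.random.default_rng(0)
n_packets = 1_000_000

# Toy "headers": one source port and payload length per packet
src_port = rng.integers(0, 65536, n_packets, dtype=np.uint32)
length = rng.integers(64, 1500, n_packets, dtype=np.uint32)

# A single parallel filter over all packets, e.g. flag DNS traffic
mask = src_port == 53
print(f"DNS packets: {mask.sum():,}, bytes: {length[mask].sum():,}")
```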

Wenji has built a prototype at Fermilab to demonstrate the feasibility of a GPU-based network monitor, using an Nvidia M2070 GPU and an off-the-shelf NIC to capture network traffic.

Not only did it not catch fire, it could easily be expanded with additional GPUs, he said.

The GPU-based system was able to speed performance by as much as 17 times. Compared with a six-core CPU, the speed-up from using a GPU was threefold.

If this is the case, then the makers of commercial network appliances could use GPUs to boost their devices’ line rates. Developers could save a bomb by using pre-existing GPU programming models such as Nvidia’s CUDA, Wenji said.

Nvidia delays Project Shield

Nvidia’s handheld game-streaming device, Project Shield, is being delayed because of a few “mechanical” problems.

Shield’s critics say the ambitious project doesn’t make a great deal of sense for the price tag. The Android-based device’s main function is to stream PC games. Great, but it has to be on the same Wi-Fi network as the host PC, the game has to be compatible, and you generally just have to stay pretty close to your computer.

Android games can be played whenever you like but, as early reviews of the Ouya box are highlighting, the Android market is too fragmented. Playing Grand Theft Auto on your phone is fine but bump this up to a bigger HD screen and the quality begins to suffer.

Nvidia isn’t exactly competing with Sony and Nintendo. Its device is more of a complementary curiosity for dedicated PC gamers, while Sony’s Vita and Nintendo’s DS, as well as the Wii U controller, seem to understand their markets. And anyone interested in portable gaming probably already has a good-enough Android phone for those games, while the console makers offer dedicated platforms with their own exclusives, on the cheap and outside of your wi-fi network.

With a modded Android device you can even easily sync up PS3 controllers.

Despite the above, some Nvidians are quite buzzed about the product, even if the rest of us aren’t sure why. It will be interesting to see the market response.

That market response should kick off some time in July, when the first shipments will go out. A day before the scheduled launch, Nvidia said it had discovered a “mechanical issue” in the device.

This is the second time that Nvidia has had to tinker with Shield. It recently cut the price from $349 to $299.

Nvidia builds huge neural network

Graphics chip maker Nvidia has revealed that it has helped Stanford University create the world’s largest artificial neural network built to model how the human brain learns.

The network is 6.5 times bigger than the previous record-setting network developed by Google in 2012.

Neural networks are capable of “learning” how to model the behaviour of the brain. They can recognise objects, characters, voices and audio in the same way that humans do.

Creating large-scale neural networks is extremely computationally expensive. Google used 1,000 CPU-based servers, or 16,000 CPU cores, to develop its neural network. This network taught itself to recognise cats in a series of YouTube videos, which was not difficult as most YouTube videos are about cats.

The Stanford team, led by Andrew Ng, director of the university’s Artificial Intelligence Lab, created an equally large network with only three servers using Nvidia GPUs to accelerate the processing of the big data generated by the network.

Using 16 Nvidia GPU-accelerated servers, the team then created an 11.2 billion-parameter neural network which was 6.5 times bigger than a network Google announced in 2012.
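The scale claim is easy to sanity-check against the article’s own numbers (a quick arithmetic sketch, nothing more):

```python
# Sanity check: 11.2 billion parameters at 6.5 times Google's size
# implies the 2012 Google network had roughly 1.7 billion parameters.
stanford_params = 11.2e9
ratio = 6.5
google_params = stanford_params / ratio
print(f"Implied 2012 Google network: {google_params / 1e9:.2f}B parameters")
```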

The bigger and more powerful the neural network, the more accurate it is likely to be in tasks such as object recognition, enabling computers to model more human-like behaviour.

Sumit Gupta, Nvidia’s general manager of the Tesla Accelerated Computing Business Unit, said GPU accelerators can bring large-scale neural network modelling to the masses.

Now any researcher or company can use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.