
Artificial intelligence has become as meaningless a description of technology as “all natural” is when it refers to fresh eggs. At least, that’s the conclusion reached by Devin Coldewey, a TechCrunch contributor.

GNU Toolchain now accepting donations with the support of the Free Software Foundation

BOSTON, Massachusetts, USA – Thursday, March 9, 2017 – The Free Software Foundation is now accepting donations to support the GNU Toolchain, a collection of foundational freely licensed software development tools. Many pieces of software depend upon the GNU Toolchain, including the GNU/Linux family of operating systems, which runs the majority of web servers, millions of personal devices, and the most advanced supercomputers.

[slides] @Dyn’s Cloud #APM and #NPM at @CloudExpo | #AI #ML #Monitoring

To manage complex web services with lots of calls to the cloud, many businesses have invested in Application Performance Management (APM) and Network Performance Management (NPM) tools. Together, APM and NPM tools are essential aids in improving the infrastructure a business needs to support an effective web experience… but they are missing a critical component – Internet visibility.

Caltech, Berkeley Lab team uses new high-throughput method to…

Using high-throughput ab initio theory in conjunction with experiments in an integrated workflow, researchers at Caltech and Lawrence Berkeley National Laboratory have identified eight low-band-gap ternary vanadate oxide photoanodes that have potential for generating chemical fuels from sunlight, water, and CO2. A report on their methodology and the new materials is published in the Proceedings of the National Academy of Sciences.

New Materials Could Turn Water into the Fuel of the Future

Scientists at the Department of Energy’s Lawrence Berkeley National Laboratory and the California Institute of Technology have, in just two years, nearly doubled the number of materials known to have potential for use in solar fuels. They did so by developing a process that promises to speed the discovery of commercially viable solar fuels that could replace coal, oil, and other fossil fuels.

HXT Semiconductor Joins Linaro to Accelerate Advanced Server Development on ARM

Budapest, Hungary – March 6, 2017 – Linaro Limited, the open source collaborative engineering organization developing software for the ARM® ecosystem, today announced that Guizhou Huaxintong Semiconductor Technology Co., Ltd. (HXT Semiconductor) has joined Linaro as a member of the Linaro Enterprise Group (LEG). “We’re very pleased to welcome HXT Semiconductor as a LEG member, and we look forward to helping them accelerate the deployment of ARM-based server solutions into China’s cloud computing and data center industries,” said George Grey, Linaro CEO.

High Performance Computing Market: Global Industry Analysis, Size,…

Zion Market Research, a market research group, has announced an analysis report titled ‘High Performance Computing Market: Global Industry Analysis, Size, Share, Growth, Trends, and Forecasts 2016-2024’. Overview: high-performance computing is the use of advanced techniques to increase the performance and efficiency of a workstation or computer for solving problems in engineering, science, and business that are beyond conventional computing technologies. High-performance computers combine multiple processors across nodes, with systems typically ranging from 16 to 64 nodes.

Federal government buying new radar system to better detect severe weather

The government announced Tuesday that it has signed an $83-million contract for 20 state-of-the-art weather radars that are to be built across the country over seven years starting this fall. Environment Minister Catherine McKenna said the radars, along with a recently acquired supercomputer, will give people more time to protect themselves and their property from severe weather.

New SuperBlade from Supermicro Revolutionizes Market, Delivering…

The new 8U SuperBlade supports both current and next-generation Intel Xeon processor-based blade servers with the fastest 100G EDR InfiniBand and Omni-Path switches for mission-critical enterprise as well as data center applications. It also leverages the same Ethernet switches, chassis management modules, and software as the successful MicroBlade for improved reliability, serviceability, and affordability.

Mellanox Sets New DPDK Performance Record With ConnectX-5

Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that its ConnectX-5 100Gb/s Ethernet Network Interface Card has achieved a record-setting forwarding rate of 126 million packets per second running the open source Data Plane Development Kit (DPDK). This breakthrough performance signifies the maturity of high-volume server I/O to support large-scale, efficient production deployments of Network Function Virtualization (NFV) in both Communication Service Provider and cloud data centers.

Fastest computer in the West might run out of time

The $30 million, house-sized supercomputer named Cheyenne belongs to a federally funded research center. It began work a few weeks ago crunching numbers for several ambitious projects, from modeling air currents at wind farms to figuring out how to better predict weather months to years in advance.

Want to See the Future of Networking? It’s in Research & Education Today

In a demonstration organized by the International Center for Advanced Internet Research at Northwestern University and the Advanced Internet Research group of the University of Amsterdam, researchers used computing clusters in San Diego and Amsterdam to show that computation need not be localized within a single data center but can instead be migrated across geographical distances, continuing to serve applications and external clients with minimal downtime. You might be asking yourself: wait, isn’t that just “the cloud”? The cloud has been omnipresent for several years now, so why are research institutions just now looking into it? Well, they’re not.

A new supercomputer in the top coal-mining state has begun critical climate-change research with support from even some global-warming doubters, but scientists worry President Trump could cut funding for such programs.

AI helps Nvidia reach record revenues

Nvidia has reported fourth-quarter fiscal year 2017 revenue of US$2.17 billion, up 55% from US$1.40 billion a year earlier and up 8% from US$2.00 billion in the previous quarter. For fiscal 2017, revenue reached a record US$6.91 billion, up 38% from US$5.01 billion a year earlier.
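The reported growth rates follow directly from the revenue figures quoted above; a quick sanity check:

```python
# Nvidia revenue figures quoted in the report, in US$ billions.
q4_fy17, q4_fy16, q3_fy17 = 2.17, 1.40, 2.00
fy17, fy16 = 6.91, 5.01

yoy = q4_fy17 / q4_fy16 - 1     # quarterly revenue, year over year
qoq = q4_fy17 / q3_fy17 - 1     # quarterly revenue, sequential
annual = fy17 / fy16 - 1        # full-year revenue growth

print(f"Q4 YoY: {yoy:.0%}, Q4 QoQ: {qoq:.1%}, FY17: {annual:.0%}")
```

The year-over-year and full-year figures round to the reported 55% and 38%; the sequential figure is 8.5%, which the report rounds down to 8%.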

ORNL, Chattanooga Electric Power Board Test the Role of Sensors in Grid Innovation

February 8, 2017 – With a fiber-optic network that provides Chattanooga residents and businesses with exceptional high-speed communications, the city’s Electric Power Board provides the US Department of Energy’s Oak Ridge National Laboratory with an ideal testbed for smart grid research. With support from DOE’s Office of Electricity Delivery and Energy Reliability, an effort was launched in 2014 to advance the state of the power grid in Tennessee.

Global Supercomputer Market 2017-2021

A supercomputer is defined as a system that is designed to solve problems or issues that require an extraordinary number of computations across fields such as engineering, science, and business. It is a computing device that is designed for speed rather than for cost-efficiency.

Supercomputing, experiment combine for first look at magnetism of real nanoparticle

Barely wider than a strand of human DNA, magnetic nanoparticles — such as those made from iron and platinum atoms — are promising materials for next-generation recording and storage devices like hard drives. Building these devices from nanoparticles should increase storage capacity and density, but understanding how magnetism works at the level of individual atoms is critical to getting the best performance.

High Performance Computing (HPC) Market Worth 36.62 Billion USD by 2020

According to a new market research report, “High Performance Computing Market by Components Type, Services, Deployment Type, Server Price Band, Vertical, & Region – Global Forecast to 2020”, published by MarketsandMarkets, the high performance computing market is estimated to grow from USD 28.08 billion in 2015 to USD 36.62 billion by 2020, a compound annual growth rate (CAGR) of 5.45% over the forecast period. Browse 66 market data tables and 34 figures spread through 162 pages and the in-depth TOC on “High Performance Computing Market – Global Forecast to 2020”. Early buyers will receive 10% customization on this report.
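The quoted 5.45% CAGR follows from the two endpoint figures in the forecast; a one-line check using the numbers above:

```python
# Market size endpoints from the forecast, in USD billions.
start, end, years = 28.08, 36.62, 5  # 2015 -> 2020

# Compound annual growth rate: the constant yearly rate that
# takes `start` to `end` over `years` years.
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.2%}")  # about 5.45%
```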

New Chancellor has great plans for Nalanda University

Pune, Jan 30 – Vijay Pandurang Bhatkar, the new Chancellor of Nalanda University, plans to explore collaboration with ancient abodes of learning, especially in Asia, as he prepares to take charge next month. Nalanda was a renowned place of learning, and it is amazing how universities of such great repute were created in the great golden days of our country, the technocrat Bhatkar said.

pyCroscopy 0.0a24

A Python package for image processing and scientific analysis of imaging modalities such as multi-frequency scanning probe microscopy, scanning tunneling spectroscopy, x-ray diffraction microscopy, and transmission electron microscopy. Classes implemented here are ported to a high-performance computing platform at Oak Ridge National Laboratory.

Parallelization and High-Performance Computing Enables Automated…

Highlights: statistical inference for multi-scale models using high-performance computing; a parallel implementation of the ABC SMC algorithm; a study of tumor spheroid growth in droplets using growth curves and histological data; and a proof of principle for fitting a mechanistic model with 10^6 single cells. Mechanistic understanding of multi-scale biological processes, such as cell proliferation in a changing biological tissue, is readily facilitated by computational models. While tools exist to construct and simulate multi-scale models, the statistical inference of the unknown model parameters remains an open problem.
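The inference method named in the highlights, ABC SMC, builds on approximate Bayesian computation. As a rough illustration of the underlying idea (not the authors’ implementation), here is a minimal ABC rejection sampler on a toy one-parameter model; the `simulate` function, the prior bounds, and the tolerance `epsilon` are all stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
observed = 5.0  # hypothetical observed summary statistic

def simulate(theta, rng):
    """Toy stand-in for an expensive multi-scale simulation:
    a noisy measurement centered on the parameter theta."""
    return theta + rng.normal(0.0, 1.0)

def abc_rejection(observed, n_draws=20_000, epsilon=0.2, rng=rng):
    """Basic ABC rejection sampling: draw parameters from the prior
    and keep those whose simulated summary falls within epsilon of
    the observed one. ABC SMC refines this idea with a sequence of
    shrinking tolerances and importance-weighted resampling."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 10.0)  # draw from a uniform prior
        if abs(simulate(theta, rng) - observed) <= epsilon:
            accepted.append(theta)
    return np.array(accepted)

# The accepted draws approximate the posterior over theta.
posterior = abc_rejection(observed)
```

Because each draw is independent, the loop parallelizes trivially across workers, which is the property the paper exploits on HPC hardware.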