
    Energy Savings in the Cloud

     

     

    Intelligence In Software: IT Software Strategy

    By John Moore for Intelligence In Software

     

    Cloud computing adopters may find an unanticipated edge when moving to this IT model: the ability to trim energy consumption.

    Cloud proponents often cite the ability to rapidly scale resources as the initial driver behind cloud migration. This elasticity lets organizations dial compute power, storage and applications up or down to match changes in demand.

    The resulting efficiency boost cuts cost and makes for a solid business case. But the cloud also has a green side: The technology, when deployed with care, can significantly reduce your power usage and carbon footprint.

    The linchpin cloud technologies — virtualization and multi-tenancy — support this dual role. Virtualization software lets organizations shrink the server population, letting IT departments run multiple applications — packaged as virtual machines — on a single physical server. Storage arrays can also be consolidated and virtualized. This process creates resource pools that serve numerous customers in a multi-tenant model.
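
    As a rough illustration of how that consolidation math works, the sketch below estimates how many physical hosts a virtualized workload might need. The VM mix, host sizes and overcommit ratio are assumptions for the example, not figures from any vendor.

        import math

        # Hypothetical workload profile; illustrative numbers only.
        vms = [
            {"name": "web", "vcpus": 4, "mem_gb": 16},
            {"name": "db",  "vcpus": 8, "mem_gb": 64},
            {"name": "erp", "vcpus": 6, "mem_gb": 32},
        ] * 20  # 60 virtual machines in total

        HOST_CORES = 32       # physical cores per server (assumed)
        HOST_MEM_GB = 256     # RAM per server (assumed)
        CPU_OVERCOMMIT = 2.0  # virtual-to-physical CPU ratio, a common planning assumption

        cpu_hosts = math.ceil(sum(vm["vcpus"] for vm in vms) / (HOST_CORES * CPU_OVERCOMMIT))
        mem_hosts = math.ceil(sum(vm["mem_gb"] for vm in vms) / HOST_MEM_GB)

        # The tighter of the two constraints sets the physical server count.
        print(f"Servers needed: {max(cpu_hosts, mem_hosts)} for {len(vms)} workloads")

    Under these assumptions, 60 workloads land on nine physical servers rather than 60, which is where the energy story begins.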

    Enterprises that recast their traditional data centers along cloud lines, creating so-called private clouds, have an opportunity to move toward energy conservation. A number of metrics have emerged to help chart that course. Power usage effectiveness (PUE), SPECpower and utilization in general are among the measures that can help organizations track their progress.

    IT managers, however, must remain watchful to root out inefficient hardware and data center practices as they pursue the cloud’s energy savings potential.

     

    Working in Tandem
    EMC Corp. has found that streamlined IT and energy savings work in tandem in its private cloud deployment. The company embarked on a consolidation and virtualization push as its IT infrastructure began to age.

    Jon Peirce, vice president of EMC private cloud infrastructure and services, describes the company’s cloud as “a highly consolidated, standardized, virtualized infrastructure that is multi-tenant within the confines of our firewall.”

    EMC hosts multiple applications within the same standardized, multi-tenant infrastructure, he added.

    As for energy efficiency, virtualized storage has produced the greatest impact thus far. EMC, prior to the cloud shift, operated 168 separate storage infrastructure stacks across five data centers. In that setting, capacity planning became nearly impossible. The company started buffering extra capacity, which led to device utilization of less than 50 percent, explained Peirce.

    Poor utilization wasn’t the only issue. EMC found that storage consumed more electricity per floor tile than servers in its data centers.

    “The first thing we did in our journey toward cloud computing was to take the storage component and collapse it to the smallest number we thought possible,” says Peirce.

    EMC reduced 168 infrastructure stacks to 13, driving utilization up to about 70 percent in the process. Consolidation reduced the overall storage footprint, which now grows more slowly because the company no longer has to buffer as much capacity.

    Storage tiering also contributed to energy reduction. In tiering, application data gets assigned to the most cost-effective — and energy-efficient — storage platform. EMC was able to re-platform many applications from tier-one Fibre Channel disk drives to SATA drives, higher-capacity devices that consume less electricity per gigabyte.

    “That let us accommodate more data per kilowatt hour of electricity consumed,” says Peirce.
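
    The arithmetic behind that claim is simple. The sketch below compares annual energy per terabyte for the two drive classes; the wattage and capacity figures are illustrative assumptions, not EMC’s measured values.

        # Rough comparison of energy per usable terabyte; illustrative numbers only.
        fc_drive   = {"capacity_tb": 0.6, "watts": 14}   # assumed 15K Fibre Channel drive
        sata_drive = {"capacity_tb": 4.0, "watts": 9}    # assumed high-capacity SATA drive

        def kwh_per_tb_year(drive):
            """Annual energy, in kilowatt-hours, needed to keep one terabyte spinning."""
            hours_per_year = 24 * 365
            return drive["watts"] * hours_per_year / 1000 / drive["capacity_tb"]

        print(f"Fibre Channel: {kwh_per_tb_year(fc_drive):.0f} kWh per TB-year")
        print(f"SATA:          {kwh_per_tb_year(sata_drive):.0f} kWh per TB-year")

    With these example figures, the SATA tier stores a terabyte for roughly a tenth of the annual energy of the Fibre Channel tier.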

    Moving data to energy-efficient drives contributed to the greening of EMC’s IT.

    Together, tiering and consolidation cut EMC’s data center power requirement by 34 percent, leading to a projected 90-million-pound reduction in carbon footprint over five years, according to an Enterprise Strategy Group audit.

     

    Getting the Most From the Cloud
    John Stanley, research analyst for data center technologies and eco-efficient IT at market researcher The 451 Group, said energy efficiency may be divided into two categories: steps organizations can take on the IT side, and steps they can take on the facilities side.

    On the IT side, the biggest thing a data center or private cloud operator can do to save energy is get as much work out of each hardware device as possible, says Stanley. Even an idle server may consume 100 to 250 watts, so it makes sense to have it doing something, he notes.
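
    A quick back-of-the-envelope calculation shows why. Assuming a 200-watt idle draw, roughly the midpoint of the range Stanley cites:

        IDLE_WATTS = 200          # assumed midpoint of the 100 to 250 W range
        HOURS_PER_YEAR = 24 * 365

        idle_kwh = IDLE_WATTS * HOURS_PER_YEAR / 1000
        print(f"One idle server: about {idle_kwh:,.0f} kWh per year")
        print(f"100 idle servers: about {idle_kwh * 100:,.0f} kWh per year")

    That works out to roughly 1,750 kWh a year per server before it does any useful work, which is why consolidation pays off so quickly.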

    Taking care of the IT angle can save energy on the facilities side. Stanley says an organization able to consolidate its workload on half as many servers will spit out half as much heat. Less heat translates into lower air-conditioning requirements.

    “When you save energy in IT, you end up saving energy in the facilities side as well,” says Stanley.

    But the cloud doesn’t cut energy costs completely on its own. IT managers need to make a conscious effort to realize the cloud’s energy-savings potential, cautions Stanley. He says they should ask themselves whether they are committed to doing more work with fewer servers. They should also question their hardware decisions. An organization consolidating on aging hardware may need to choose more energy-efficient servers in the next hardware refresh cycle, he adds.

    In that vein, developments in server technology offer energy-savings possibilities. Today’s Intel Xeon Processor-based servers, for instance, include the ability to “set power consumption and trade off the power consumption level against performance,” according to an Intel article on power management.

    This class of servers lets IT managers collect historical power consumption data via standard APIs. This information makes it possible to optimize the loading of power-limited racks, the article notes.
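
    The article doesn’t name the specific APIs involved, but the underlying idea of using observed power draw to load power-limited racks can be sketched with a simple packing heuristic. The server names, readings and rack budget below are hypothetical.

        # Sketch: pack servers into racks without exceeding a per-rack power budget,
        # using each machine's observed peak draw rather than its nameplate rating.
        RACK_BUDGET_WATTS = 1500  # hypothetical per-rack power limit

        # Hypothetical peak readings (watts) taken from each server's power history.
        peak_draw = {
            "srv01": 310, "srv02": 280, "srv03": 450, "srv04": 390,
            "srv05": 520, "srv06": 300, "srv07": 410, "srv08": 275,
        }

        racks, current, used = [], [], 0
        for server, watts in sorted(peak_draw.items(), key=lambda kv: -kv[1]):
            if used + watts > RACK_BUDGET_WATTS:   # start a new rack once the budget is hit
                racks.append(current)
                current, used = [], 0
            current.append(server)
            used += watts
        racks.append(current)

        print(f"{len(peak_draw)} servers fit in {len(racks)} power-limited racks")

    Planning against measured draw rather than nameplate ratings typically lets more servers share each rack’s power budget.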

     

    Keeping Score
    Enterprises determined to wring more power efficiency out of their clouds have a few metrics available to assess their efforts.

    Power usage effectiveness, or PUE, is one such metric. The Green Grid, a consortium that pursues resource efficiency in data centers, created PUE to measure how data centers use energy. PUE is calculated by dividing total facility energy use by the amount of power consumed to run IT equipment. The metric aims to provide insight into how much energy is expended on overhead activities, such as cooling.
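
    In other words, the calculation is a single ratio. A minimal sketch, with made-up meter readings:

        def pue(total_facility_kwh, it_equipment_kwh):
            """Power usage effectiveness: total facility energy divided by IT equipment energy."""
            return total_facility_kwh / it_equipment_kwh

        # Hypothetical monthly meter readings.
        print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # prints 1.5
        # A PUE of 1.5 means half a watt of overhead, mostly cooling, for every watt of IT load.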

    Roger Tipley, vice president at The Green Grid, said PUE is a good metric for facilities engineers who need to size a data center’s infrastructure appropriately for the IT equipment installed. PUE may be coupled with the PUE Scalability Metric, a tool for assessing how a data center’s infrastructure copes with changes in IT power loads.

    Peirce said EMC tracks PUE fairly closely and designed a recently opened data center to deliver a very aggressive PUE number. The lowest possible PUE value is 1.0, which represents 100-percent efficiency: every watt entering the facility goes to IT equipment. While PUE looks at energy from a facility-wide perspective, data center operators also focus more specifically on hardware utilization. Peirce said utilization is the main metric EMC uses, employing that measure to gauge both cost and energy efficiency.

    Stanley says utilization measures can get a bit complicated. A CPU may experience very high utilization, but have no essential business tasks assigned to it.

    “Utilization is not necessarily always the same thing as useful work,” he says.

    Other hardware-centric metrics include SPECpower_ssj2008 and PAR4. Standard Performance Evaluation Corp.’s SPECpower benchmark assesses the “power and performance characteristics of volume server class computers,” according to the organization. PAR4, developed by Underwriters Laboratories and Power Assure, is used to measure the power usage of IT equipment.

    The various measures can guide enterprises toward energy savings, but making the efficiency grade requires a concerted effort.

    “Yes, you can save energy by switching to a private cloud,” says Stanley. “But just saying ‘We are going to switch’ is not enough.”

    Photo: @iStockphoto.com/HowardOates

    John Moore has written on business and technology topics for more than 20 years. Moore’s articles have appeared in publications and on websites including Baseline, CIO Insight, Federal Computer Week, Government Health IT and TechTarget. Areas of focus include cloud computing, health information technology, systems integration and virtualization.

