


This overview takes a look at power - predominantly electricity generation. Around 40% of fossil fuels and
other energy sources used in 2004 were used to create electricity.

Since 1995, the amount of power generated around the world has grown by about 2.4% a year. However, the
demand for electricity is growing significantly in developing countries such as China and India. Between 2004
and 2030, global demand for electricity is set to nearly double, from 17,408TWh to 30,364TWh. This will
require over US$10 trillion of investment.

Burning so much fossil fuel presents a serious environmental challenge. In 2004, greenhouse gas emissions
from the power sector came to 12.7 GtCO2e. This made up 26% of total global greenhouse gas emissions, half
of which came from the US, China and the EU. Unless something is done, emissions could increase to 17.7
GtCO2e by 2030.

But with smart investment in low carbon energy infrastructure and enabling policies, the power sector could
reduce the intensity of its emissions. At a carbon price of €40/tCO2, this reduction is estimated to be 35%.

The knock-on benefits would also include technological innovation, lower fuel costs and job opportunities. In
the US alone, it is forecast that over 3 million new jobs will be created in the renewables sector by 2030.

The power sector needs to be decarbonised, but there are obstacles in the way. It’s expensive to upgrade transmission
and distribution (T&D) networks. It’s also currently expensive to develop and deploy carbon capture and storage (CCS).
And as long as low carbon power generation is more expensive than traditional methods, green power will never be mainstream.

This is where government needs to step in to incentivise the power industry, business and society to move to a
low carbon economy. If low carbon energy infrastructure isn’t put in place now, the world will be ‘locked in’ to
the emissions from these power stations for the length of their working lives (of up to 100 years).

Success stories

Berlin, Germany: energy savings and CO2 reductions

ESCOs have invested more than €43,125,882 in efficiency projects in more than 1,400 buildings throughout
Berlin, resulting in CO2 reductions of more than 60,400 tonnes per year. Retrofitted buildings have generated
energy savings of €10,164,848, or nearly 26% of their total energy bills.
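As a rough gauge of the economics quoted above, here is a simple-payback sketch; the investment and savings figures are the ones stated, and "simple payback" ignores financing costs, so it is only an indicative number.

```python
# Simple-payback reading of the Berlin ESCO figures quoted above.
investment_eur = 43_125_882      # total ESCO investment
annual_savings_eur = 10_164_848  # annual energy savings
annual_co2_saved_t = 60_400      # annual CO2 reduction, tonnes

simple_payback_years = investment_eur / annual_savings_eur  # ~4.2 years
eur_per_annual_tonne = investment_eur / annual_co2_saved_t  # ~€714 per tonne of annual abatement

print(round(simple_payback_years, 1), round(eur_per_annual_tonne))
```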

Johnson & Johnson: CO2 savings and financial return

As of the end of 2007, Johnson & Johnson has approved 51 projects valued at US$99 million, 31 of which have
been completed. Projects include boiler upgrades, HVAC enhancements, combined heat and power projects,
solar installations, and chiller upgrades. To date, the completed projects have produced an average IRR of more
than 16%. Once all 51 projects are complete, Johnson & Johnson expects to reduce its CO2 emissions by 90,044
tonnes per year.

Green practices in the real world – Rackspace Hosting’s take on energy efficiency
Posted by Tony Chan on Oct 15, 2008 in Applications, Data centres, ICT, Networks

Green Telecom’s Tony Chan speaks to Rackspace Hosting chairman Graham Weston about the company’s
approach to green practices, what makes a green data centre, and the real significance of virtualisation.

Green Telecom: So what is Rackspace Hosting’s approach to ‘green’ practices?

Graham Weston: Two or three years ago, we actually switched to AMD processors because they drew less energy.
Intentionally, by one decision, we went with AMD. It saved on our power bill and our carbon footprint, and it helps
us be more efficient and more productive in the data centre. That was an important inflection point.

The first thing we decided was: we are going to make sure that customers always have a choice of using servers
that are more efficient. Like in America, a lot of people drive big cars – they don’t need them, but they drive
them. A lot of people like bigger, power-hungry servers over less power-hungry ones. I think the first thing is to
give customers the choice and to say, ‘use this server, because here is how you can contribute to fighting global
warming.’ I think the basic unit of green is to engage the individual. There are some decisions that we make that
are just our way of operating – others are about giving our customers the choice to do things for themselves. We
are going to let them buy a smaller car instead of an SUV, so to speak. Giving them that choice lets the customer
say, ‘yes, I think I’m going to participate here.’

Sometimes it is not appropriate for them, but we want to give customers the choice.

The second thing is we went and designed a data centre with a very efficient air conditioning system. In a data
centre, you have a lot of power being burnt in the servers, and every watt of that power goes to heat – electricity
goes to either light or heat, and there’s no light, so it all goes to heat. So if the server burns 300 watts, it’s all
heat, and we have to take it out with air conditioning. It is a huge volume to remove. The way we remove it is one
of the opportunities to be green. What we do is install air conditioning units that are heat exchangers. Ordinarily,
you remove the heat through mechanical means, with a compressor. Like the air conditioning in your house: you
have a compressor that uses all the power, a system that dissipates the heat, and a compressor that compresses
the refrigerant and discharges the heat. What we did was bring in a system that does this passively. What a passive
system allows you to do is – say the temperature of the air inside is 22 degrees; if the air outdoors is 22 degrees,
then all you have to do is take the air from outside and exchange it with the air from the inside.
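To put the heat argument in numbers, below is a minimal back-of-the-envelope sketch, not Rackspace’s tooling, using standard unit conversions (1 W of heat ≈ 3.412 BTU/h; one ton of refrigeration ≈ 3.517 kW). Only the 300 W server figure comes from the interview; the 40-server rack is a hypothetical illustration.

```python
# Back-of-the-envelope sketch: essentially all electrical power drawn by a
# server ends up as heat that the cooling plant must remove.
WATTS_PER_TON = 3517           # 1 ton of refrigeration ~= 3.517 kW of heat removal
BTU_PER_HOUR_PER_WATT = 3.412  # 1 W of heat ~= 3.412 BTU/h

def cooling_load(server_watts: float, servers: int = 1) -> dict:
    """Heat load produced by `servers` machines each drawing `server_watts`."""
    heat_w = server_watts * servers  # all of it becomes heat
    return {
        "heat_watts": heat_w,
        "btu_per_hour": heat_w * BTU_PER_HOUR_PER_WATT,
        "tons_of_cooling": heat_w / WATTS_PER_TON,
    }

print(cooling_load(300))      # the 300 W server: ~1,024 BTU/h, ~0.085 tons
print(cooling_load(300, 40))  # a hypothetical 40-server rack: ~41,000 BTU/h, ~3.4 tons
```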

This is not rocket science. This is not genius. It means that once we started paying attention, we could reap the
benefits.

So is power a big cost component for Rackspace?


It’s a big expense, but it’s a small percentage. Power is less than 2% of our revenue. It’s still a large number. We
are a company that did US$360 million last year. Last quarter, we did US$130 million, so 2% of that is US$2.6
million, or roughly US$900,000 a month. It’s a big bill, but it is still a small percentage of our revenue, because
most of our revenue is for service. That is up substantially from last year – it’s a noticeable rise.

We want to be sure that we have an alternative for them and that we can explain the clear advantage it
brings them. It’s not really a cost thing for us, but a green thing. If we are managing a server for a customer
for US$500, 2% of that is power, so US$10 – it’s not enough to matter. The green servers are usually more
expensive.
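To make the arithmetic behind those figures explicit, here is a minimal sketch; the only inputs are the numbers quoted above, and the calculation is just the stated percentages applied to them.

```python
# Rough check of the power-cost figures quoted in the interview.
quarterly_revenue = 130_000_000  # US$130 million last quarter
power_share = 0.02               # power is "less than 2%" of revenue

quarterly_power_bill = quarterly_revenue * power_share  # US$2.6 million
monthly_power_bill = quarterly_power_bill / 3           # ~US$870,000 ("roughly US$900,000")

managed_server_price = 500                                   # US$500/month managed server example
power_cost_per_server = managed_server_price * power_share   # ~US$10/month

print(quarterly_power_bill, round(monthly_power_bill), power_cost_per_server)
```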

There’s a lot of talk about the power efficiency of data centres. What is your view on the topic?

Efficiency is work divided by energy. If you have a certain amount of processing power divided by the energy it
uses, you could end up with an efficiency rating for the server.
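As a concrete reading of that definition, here is a minimal sketch; the request counts and power draws are invented illustration values, not Rackspace or vendor figures.

```python
# "Work divided by energy": an efficiency rating is just useful work per unit of energy.
def efficiency(work_done: float, energy_kwh: float) -> float:
    """Work per unit of energy, e.g. requests served per kWh."""
    return work_done / energy_kwh

# Two hypothetical servers handling the same one-hour workload:
old_server = efficiency(work_done=1_000_000, energy_kwh=0.40)  # a 400 W box for 1 hour
new_server = efficiency(work_done=1_000_000, energy_kwh=0.25)  # a 250 W box for 1 hour
print(old_server, new_server)  # 2.5 million vs 4.0 million requests per kWh
```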

At the data centre level, a lot of the power goes to cooling and redundancy. We have a lot of systems that sit
around and do nothing. If you want to make a data centre more efficient, just make it less redundant – but none of
our customers are willing to take that trade-off. The other issue is location: if we put our data centres in Iceland,
it would be more efficient than putting them in a hot place, because cooling gets cheaper the colder you are. So if
you are in Iceland, your ability to do passive cooling is much greater.

The problem is that it is hard to operate there. We are going to look at it, but I think it’s going to be a long shot.
We have to be able to get servers there, generator experts there, but you never know, we’ll see.

Another issue is the power source. Is it coal? Is it hydro? Is it nuclear? That’s what makes it hard to have a
standard. I would say, let’s have an Energy Star on routers, switches, UPSs, the air conditioning system itself –
the things that draw energy. It’s like the air conditioner in your house. You can buy the really expensive one that
is more efficient, or you can buy the cheap one that is not. When you make these decisions in the data centre,
you end up with a more efficient data centre. The biggest factor is the efficiency of the server, which is hard to
measure – in theory you can do it.

I think there’s a perception that data centres are a new problem. Say Rackspace manages 40,000 servers and that
grows by hundreds per month. So you can say: Rackspace, a million dollars of power, you are a polluter, and so on. The
dilemma is that we are taking the servers that customers used to manage themselves and managing them for them.

The largest users in the world still end up being companies who have a server room in the back of the office.
What we do is transplant those servers from there to us. We are taking the servers that previously sat in
the server closet at the business, and now we have them. So nothing has really changed: they had air
conditioning in their offices that pulled the heat out – now we do.

What about cloud computing?

The server sits there most of the time, and spikes when there is an activity such as an email blast that you send
out. Most of the time, it just sits there – this is the really wasteful thing.

The power demand rises obviously when the server is working harder, but only marginally. The ideal thing is to
figure out a way to smooth out the demand, so we have the server running at full throttle all the time, because
you can get more work done for the same amount of energy.

So the way you do that is through cloud computing. It means it is pooled – it means it is shared, though I don’t like
the word ‘shared’ because it raises questions and issues around security, so let’s say pooled. If you have
1,000 servers and a terabyte drive in each one, you have 1,000 terabytes of data storage – that’s an
unbelievable amount of storage; the whole city of Hong Kong probably didn’t have 1,000 terabytes of
storage five years ago. Then you have dual processors with four cores each, so eight cores per server – that’s
8,000 cores.

These two resources can be used full throttle all of the time, or they can be used only slightly. If they are used
in the traditional way, the servers sit around and run at 50%, 60%, 70% utilization, with the air conditioning
running, the batteries being charged, the routers running – everything has to run. It’s like leaving your
car idling all day long.

The real opportunity, I think, is to say, ‘look, instead of having a thousand servers doing a certain amount of
work, how can we make those 1,000 servers do more work?’ The answer is by using virtualization and cloud
computing to level out the load. We think the gain is around 4x – the amount of work that can be done with
cloud computing can be four times what it is today.

We have two cloud computing services. One is email hosting. If you have a company that handles email for you,
that server can be running full throttle all day. What we have – and I don’t have the exact number – is 400-500
servers that run 800,000 mailboxes. That means we are running 2,000 mailboxes per server. Think about the
server in the average business: it is running 10 mailboxes.

So if we use mailboxes as a measure of work: say a hotel has 100 employees on one server – that is a
productivity of 100. We have 2,000 mailboxes on a server, so that is 20x. Also, they are not running it all on
one server; it’s a factory. If you want to run mail correctly, you need an inbound server, an outbound
server, a virus server, a spam server and, in some cases, a Blackberry server – so it’s really five servers. So
100 divided by 5 is 20. It’s 20 mailboxes per server compared to 2,000, so it’s 100x.
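The arithmetic in that comparison can be written out directly; all the figures below are the ones quoted in the interview (the 100-employee hotel is Weston’s own example).

```python
# In-house: a 100-employee company running its own mail "factory" of roughly
# five servers (inbound, outbound, anti-virus, anti-spam, Blackberry).
in_house_mailboxes = 100
in_house_servers = 5
in_house_density = in_house_mailboxes / in_house_servers  # 20 mailboxes per server

# Hosted: the quoted 800,000 mailboxes on roughly 400 servers.
hosted_mailboxes = 800_000
hosted_servers = 400
hosted_density = hosted_mailboxes / hosted_servers  # 2,000 mailboxes per server

print(hosted_density / in_house_density)            # ~100x
```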

What matters is utilization. You can theoretically come up with how many cycles a server produces and come
up with a benchmark test, but what matters is utilization. So you want to be able to balance the load.

When we first started, we had bandwidth that we had to pay for all day long, but it was only peaking at, say,
7pm; the rest of the day it was underutilised. So we opened up our UK office so we could sell the excess
bandwidth during off-peak hours (because the peak hours for the UK are different from the peak hours in the
US) – it’s free anyway. What we did was change the utilization curve to get more utilization out of it.

And this is the same basic idea with servers. Cloud computing, or pooled computing, will allow us to manage
peak-to-trough utilization much better. Today, Rackspace has 40,000 servers, most of them dedicated to
customers and running in a curve. But if we can take those 40,000 servers and have them run like our bandwidth, we
can do far more work. That’s the green function of cloud computing.

Is this like virtualization?

Think about virtualization as the engine of a car. By itself it’s kind of useless, but if you put a car around it and
put the power to the wheels, then it works. The server itself has to run software. Normally there is software
called the OS, and then you run an application on top of that. In order to run Word on your PC, you need a processor,
you need an OS running on top of that, and you need Word running on top of the OS. But if you have five people
logged into one computer all running Word, it’s not going to work very well, because all five people are trying to
share the application and competing for resources – it’s like we are all trying to drink from the same cup of
tea. You can do it, but I like to have my own cup.

Virtualization actually virtualises the server: there is a layer that takes that server and splits it up into little
servers. These little servers mean that we each get our own tea cup. We each get less, but we all get our own.

So when you load Windows running Word, Windows is fooled into thinking that it has its own cup of tea, but
when the other little servers are not running, the power gets transferred to the one that is running
applications. Windows does not even know that it is running on a computer with other people; it doesn’t know
that there’s a teapot and that it is sharing the tea with other tea cups. All Windows knows is that it
has its own tea cup.

What virtualization allows you to do – it’s like cloud computing but at the server level – is that instead of
being one user on a server, you can have, say, 20 users. But they can’t all run at full throttle. The idea is to load-
balance, get all of them working, and get utilization up.

So the first element of cloud computing is virtualization. The second part is: here are two servers in the data
centre and here are all the little machines (the virtualized servers), and when one server is filled up, a machine can
be moved to the other server. In the end, the result is servers that run at 90% instead of 20%. The thing is that all the
software developers have given us today lets you virtualize one server very well, but it won’t
let you share between servers very well, and it won’t let you expand or contract this container.
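The packing idea Weston describes can be sketched in a few lines. This is a generic first-fit placement under an assumed 90% utilization ceiling, not Rackspace’s software or any specific hypervisor feature.

```python
# Minimal consolidation sketch: pack VM loads (each a fraction of one host's
# capacity) onto as few hosts as possible, capped at a target utilization.
from typing import List

def place_vms(vm_loads: List[float], host_capacity: float = 0.9) -> List[List[float]]:
    """First-fit decreasing placement of fractional VM loads onto hosts."""
    hosts: List[List[float]] = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host has room: open a new one
    return hosts

# Twenty lightly used VMs that would otherwise idle on twenty dedicated servers:
vms = [0.15, 0.2, 0.1, 0.05, 0.3, 0.25, 0.1, 0.2, 0.15, 0.05,
       0.1, 0.2, 0.3, 0.05, 0.15, 0.1, 0.25, 0.2, 0.1, 0.05]
hosts = place_vms(vms)
print(len(hosts), [round(sum(h), 2) for h in hosts])  # 4 hosts, three of them ~90% loaded
```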

My point really is that if you run on a cloud basis, it’s automatically greener. What we are trying to do is
take the capacity that exists in the world today and make the world’s servers more efficient.

The first important technology in cloud computing is the ability to create these virtual machines. But that is not
really enough, because it only allows multiple users on a single machine. The software that allows
you to expand and contract a VM, and the ability to move it across servers, will be the next critical step. The
key point is to have companies like Rackspace provide computing, so that more computing is provided by
hosting companies – that will make computing greener.

One of the challenges is to convince companies to run applications on servers that are also running applications
from other companies. But it’s not that hard a concept to get across. When you go to a bank, they don’t store
your money in a corner by itself, they pool it, it’s the same concept.

So what’s next?

We have another offering called Cloud FS, which is for file sharing. That’s in beta right now and is being
launched. It is going to be a massive pool of storage. Think about your music: you upload it
to us and we’ll put it on some servers here, but the next time you back up or upload, it will be on another
group of servers, and so on.

The key is the software that will make it all look like one storage. That is the sort of thing that hosting
companies are thinking about when delivering cloud computing services to customers.

We are the ones that have been delivering computing as a service for 10 years, and we are the ones developing
the software to allow this to work.

Cloud computing is a key component of IT efficiency


Posted by Tony Chan on Oct 15, 2008 in Applications, Climate Change, Data centres, Featured, ICT, Networks

The emerging capabilities of cloud computing environments hold the potential to improve the efficiency of IT
infrastructure as much as 100-fold, according to estimates from one application services provider.

Read the full interview with Graham Weston above.


According to the chairman of Rackspace Hosting, Graham Weston, cloud computing, by driving up the
utilization of data centre infrastructure, can deliver massive efficiency gains versus traditional standalone
systems.

“Most of the time, it (a traditional server) just sits there – this is the really wasteful thing,” Weston said. “The
power demand rises obviously when the server is working harder, but only marginally. The ideal thing is to
figure out a way to smooth out the demand, so we have the server running at full throttle all the time, because
you can get more work done for the same amount of energy.”

The way that can be accomplished is through cloud computing, which means that resources are “pooled,” he
explained. “If you have 1,000 servers and a terabyte drive in each one, you have 1,000
terabytes of data storage – that’s an unbelievable amount of storage; the whole city of Hong Kong probably
didn’t have 1,000 terabytes of storage five years ago. Then you have dual processors with four cores each, so you have
eight cores per server – that’s 8,000 cores. These two resources can be used full throttle all of the time, or they
can be used only slightly. If they are used in the traditional way, the servers sit around and run at 50%, 60%,
70% utilization, with the air conditioning running, the batteries being charged, the routers running –
everything has to run. It’s like leaving your car idling all day long.”

Cloud computing is a way to get those same servers to run at higher utilization, for more of the time.

“The real opportunity, I think, is to say, ‘look, instead of having a thousand servers doing a certain amount
of work, how can we make those 1,000 servers do more work?’ The answer is by using virtualization and cloud
computing to level out the load.”

While he initially estimates the workload gain of cloud computing over traditional systems at four times, a back-of-the-envelope
calculation comparing a hotel with 100 employees to Rackspace’s hosted email service suggests efficiency
gains of up to 100 times.

100X EFFICIENCY GAINS


The company’s cloud computing service includes an email hosting offering that allows companies to
outsource the email accounts of their employees to Rackspace. By consolidating the mailboxes of multiple
companies onto its servers, the company is currently supporting 800,000 mailboxes on between 400 and 500
servers.

“That means we are running 2,000 mailboxes per server. Think about the server in the average business: it is
running 10 mailboxes,” he said. “So if we use mailboxes as a measure of work: say a hotel has 100 employees
on one server – that is a productivity of 100. We have 2,000 mailboxes on a server, so that is 20x. Also, they
are not running it all on one server; it’s a factory. If you want to run mail correctly, you need an
inbound server, an outbound server, a virus server, a spam server and, in some cases, a Blackberry server –
so it’s really five servers. So 100 divided by 5 is 20. It’s 20 accounts per server compared to 2,000,
so it’s 100x.”

When it comes to measuring efficiency, Weston says that what matters is utilisation.

“When we first started, we had bandwidth that we had to pay for all day long, but it was only peaking at, say, 7pm;
the rest of the day it was underutilised,” he pointed out. “So we opened up our UK office so we could
sell the excess bandwidth during off-peak hours (because the peak hours for the UK are different from the peak
hours in the US) – it’s free anyway. What we did was change the utilization curve to get more
utilization out of it.”

He added: “And this is the same basic idea with servers. Cloud computing, or pooled computing, will allow us
to manage peak-to-trough utilization much better. Today, Rackspace has 40,000 servers, most of them dedicated
to customers and running in a curve. But if we can take those 40,000 servers and have them run like our bandwidth,
we can do way more work. That’s the green function of cloud computing.”

Nortel’s latest salvo – global Cisco Energy Tax equals 23m cars on the
road
Posted by Tony Chan on Oct 14, 2008 in Climate Change, Data centres, Networks, broadband

Nortel’s director of enterprise technology, Tony Rybczynski, writes in his blog that the additional emissions from the
world’s installed base of Cisco equipment, compared with equivalent Nortel gear, are equivalent to over a trillion
kilometres of travel in a small car.

“Across some 500 million Cisco enterprise ports globally, Cisco is adding an additional 11.5 MILLION metric
tons of CO2, that wouldn’t be added to the atmosphere if those ports were Nortel,” Rybczynski wrote on his The
Hyperconnected Enterprise blog. “That is equivalent to 656 BILLION miles (over a trillion kilometres) of travel
in small cars or 23 MILLION cars at 30,000 miles per year!!!!”

According to Rybczynski’s post, switching a 2,500-user network with GigE desktops and IP telephony from
Cisco to Nortel switches would offer CO2 emission savings of 7,106 metric tons, equivalent to 150 large cars or
249 small cars driven 100,000 miles each over 5 years.
“In fact, over a five-year period, businesses worldwide are spending $6.1-billion more in energy costs to power
and cool Cisco networks than they would have had they used a comparable Nortel solution,” he said.

See Rybczynski’s original post at: http://blog.tmcnet.com/the-hyperconnected-enterprise/green-it/ciscos-human-network-effect-taxes-and-co2-emissions.asp

HP tops ABI “Green Data Center” vendor matrix
Posted by Tony Chan on Oct 16, 2008 in Data centres, Green corporations, ICT

ABI Research has placed HP at the top of a new “Green Data Centre” vendor matrix, which analyses vendors
according to their “innovation” and “implementation” across several criteria.

For the matrix, under “innovation,” ABI Research examined the firms’ carbon footprints, their regulatory
compliance, recycling efforts, their efforts at “greening” internal operations, their use of video and
telecommuting, and their membership and participation in environmental organizations.

Under “implementation,” ABI Research scrutinized the following criteria: the firms’ product portfolios, their
product features, their intellectual property holdings, their certification achievements, and the planning and
virtualization tools they use.

Hewlett-Packard received points for its innovative Dynamic Smart Cooling technology as well as for its
homegrown power distribution system and its wide selection of low-power component choices for its
customers. It trailed only IBM in intellectual property and green services offerings, and received extra points for
its fine virtualization software that spans various product categories.

According to ABI Research vice president and research director Stan Schatt, “HP should be complimented for
its extensive internal green efforts. While it has not received quite the publicity of IBM’s Big Green efforts, the
company has perhaps the most extensive list of carbon goals of any vendor, with clear accountability each year
on which goals have been met. Cisco’s focus on network switching and storage means that it cannot offer quite
as broad a portfolio of products and services under one roof but must rely on vendor partners to fill technology
gaps and channel partners to provide integration services. Still, Cisco joins HP at the top of any listing of
vendors’ internal green efforts.”

IBM and Cisco claimed the second and third spots in the Green Data Centre matrix. IBM and Cisco both scored
over 90% in the Implementation category.

“It is clear that “Big Blue” is now coloring itself very green – IBM has devoted a billion dollars a year to its
green R&D efforts,” ABI said. “Its equipment has not scored as high on a number of third-party power
consumption tests, but the caveat is that these tests were paid for by its competitors. Its Active Energy Manager
software reveals the advantage of not having to integrate third-party products to fill product gaps. IBM’s
intellectual property is unmatched by any vendor, although both HP and Cisco would certainly be in the top 5%
of any list. The company’s chilled water cooling technology gives it a good green story to tell, and its services
related to environmental controls are the most comprehensive of the top three vendors.”

Meanwhile, the research firm praised Cisco for “a number of very useful power consumption tools and design
help for customers who want to green their data centers.”

“In addition to offering tools for virtualization, its VFrame Data Center software dynamically partitions,
provisions, and assigns computing, network, and storage resources to different applications through an
intelligent network fabric,” ABI said. “Cisco has had a number of its switches certified as green by Miercom
Labs. The company argues that its Service Module architecture approach – in which applications are
consolidated on a single switching platform rather than run on separate appliances – not only saves money but
also results in much lower overall power consumption. Cisco’s focus on network switching and storage means
that it cannot offer quite as broad a portfolio of products and services under one roof, but must rely on vendor
partners to fill technology gaps and channel partners to provide integration services. The result is a slight
disadvantage when it comes to tightly integrating green strategies across all product categories, something that
HP and IBM are able to do very effectively.”

How Intel measures its environmental performance


Posted by Tony Chan on Aug 6, 2008 in Applications, Data centres, Featured, Green corporations, ICT,
Renewable Energy

As the world’s biggest manufacturer of computer microprocessors, Intel can fairly be said to sit at the heart
of the information revolution. Over the years, the company has managed to reduce the energy
consumption of its chips in the marketplace, but can its own efforts to reduce emissions keep pace with the
global demand for its products? In this interview with Dave Stangis, Intel director of corporate social
responsibility, Green Telecom editor Tony Chan finds out what the company is doing to reduce the
environmental impact not only of its products, but also of its operations, its supply chain and its energy
procurement policy. More importantly, we find out how Intel measures and quantifies its environmental
performance.

What are some of the core initiatives that Intel has implemented to reduce energy consumption across its
operations?

In terms of our core initiatives, we take a life cycle approach. We look at the life of a
chip from the time it is conceived, working on designing the process and working with manufacturing to
optimise energy use. Our approach is to design for the environment, building more energy efficiency into
our process so that we account for it upfront in the design phase, so that when the chip comes to
market it uses less energy, and we can offer that promise and performance to our customers.
It goes beyond that. The biggest issue is within our factories – they are very large and require a lot of
energy and resources. On energy alone, we have a dedicated capital funding program focused on energy
conservation. For example, since 2001 we’ve spent about US$20 million on energy conservation projects, which
saved US$1.2 million and 500,000 kWh of energy. That’s just in our own manufacturing facilities.

When you look outside the company, to when the chips are in the marketplace, there’s been a big change in
Intel’s energy efficiency portfolio – technologies like dual core, which offers a 40% reduction in energy
consumption while doing 40% more work. As of last May, there were enough dual-core processors
out in the marketplace that the work they did and the energy they saved, compared to previous generations,
was equal to taking 2 million cars off the road. Now we are up to taking 4 million cars off the road in terms of
climate impact.
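A back-of-the-envelope reading of those two 40% figures (the only inputs below; the baseline of 1.0 is an arbitrary normalisation) shows what they imply for energy per unit of work.

```python
# Energy per unit of work implied by "40% less energy while doing 40% more work".
baseline_energy = 1.0
baseline_work = 1.0

dual_core_energy = baseline_energy * (1 - 0.40)  # 40% less energy
dual_core_work = baseline_work * (1 + 0.40)      # 40% more work

energy_per_work = dual_core_energy / dual_core_work
print(round(energy_per_work, 2))  # ~0.43, i.e. roughly 57% less energy per unit of work
```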

The fourth pillar for Intel is leadership – leadership out in the industry, among our peers and competitors. Here is
where you find things like the Climate Savers Computing Initiative that we launched last year, our focus on
energy-efficient design centres, and our climate change policy worldwide.

Those are the big four pillars that represent our core initiatives – product design and manufacturing, the
actual product in the marketplace, reducing emissions, and leadership.

What methodology does Intel deploy for measuring its carbon emissions? What is being done to contain and
reduce emissions?

There are standard methodologies out there. As you are probably aware, the Carbon Disclosure Project has
methodologies for measuring energy, climate change emissions, climate impact, and so on. Part of that
methodology is basically measuring the energy that we buy and the carbon emissions we produce, and the
strategies we take to contain those emissions across the board, with an absolute goal of reducing them by
40% by 2012. So we target reducing our energy use by a measurable amount every year.

Also, we are one of the largest users of renewable power in the US, in particular, wind power, hydro, biomass,
solar. In fact, we don’t really talk about offsetting any particular product or plant, but renewable energy use
represents 46% of our US energy consumption.

The strategy going forward is: we want to continue to do the right thing in the renewable power space, so we
are looking to install real projects in our facilities worldwide. We are focusing on decision criteria that
take into account costs and return on investment. So we are considering the installation of some solar and
solar water projects at some of our sites. We are looking at every site that we operate worldwide and at
the kind of technology we can implement worldwide, and using that to base our investment
decisions on. We will probably install or initiate the installation of three projects this year alone.

In Intel’s CSR report, there’s a point that Intel is on track to meet a goal to reduce emissions per unit of
production. Does Intel measure its environmental performance in any other way, such as emissions per unit of
revenue or per employee?

We are on track to reduce our emissions per unit of product. This normalization concept – per chip, per
manufactured unit, per unit of revenue, per employee – is certainly something that’s been around for a while. Intel has
been reporting its environmental performance since 1994, so we’ve been doing it for a long time.

We have experimented with different kinds of normalization methods. We started out normalizing on
revenue – basically a trend line that tracks energy, emissions, water, waste and so on, divided by revenue. But in
the transparency space, in the reporting movement, there’s a move towards more tangible normalization.

So we actually report normalised per production chip, but we also report on all employees, employees by
site, employees by region, revenue and income. On top of all that, we publish a data sheet with all of our
environmental data, so if you are interested in energy use per country, you can dig through all of it. It is very
transparent.

In terms of environmental reporting, it is still early days and many organizations are only beginning to explore it.
Some companies issue their environmental report along with their annual report, but always as a separate document.
Does Intel actually integrate its environmental reporting into the annual report?

Yes, it is an annual process for us. We publish our report according to global reporting guidelines, and we put it
on the Web in PDF. We create an executive summary and distribute it to people around the world. We have a
full web page where you can scroll through all the data, and we publish some working reports as well. We
launch the report in conjunction with our annual stockholders’ meeting – the general meeting of the
company – and we reference the data and the report in our financial statements, so there is a correlation.

Does Intel have a green procurement policy?

Inside the company, we have developed goals for Intel’s suppliers which will basically reward them for
offering additional green procurement options. So there are green procurement policies in terms of materials,
recyclability, packaging, and so on. What we have done this year, for the first time, is to set up criteria for all
our suppliers: they will have the next three years to become incrementally more environmentally friendly.

At the end of 2008, we expect our suppliers to have an environmental rating of excellent. At the end of 2009, we
expect them to have goals in terms of environmental performance. By 2010, we expect our suppliers to have
published performance metrics on their environmental initiatives. We have set out a roadmap for our suppliers to
become more environmentally responsible.

Does Intel set internal goals - performance indicators - for individual departments/locations?

There are internal goals. What happens at Intel is that we derive our external goals from our operational
goals. Inside the company, there are multiple processes that tie goals to different initiatives for flash memory,
microprocessors, servers, and so on. Each of those initiatives has its own set of internal goals, but those are
much too complicated to translate for the external world. We do a lot of work translating those internal goals
into corporate goals that speak for the whole company. So yes, we have our internal goals, and we work to
translate those for the market.

As a technology leader, is Intel working on anything beyond making its chips more energy efficient, such as
sensors for managing energy consumption, applications for reducing travel emissions, work-at-home programs,
and so on?

There’s a lot of work in the server and design space, some work with the US Environmental Protection Agency
on standards for low energy use, external efforts on reducing the energy use of desktops, initiatives with
universities and so on. We are also beginning to look at using our technology to manage energy use across
different markets and to optimize travel routes for lower emissions. For years we’ve had a telecommuting program
and teleconferencing programs to minimise travel emissions. We still have some work to do on quantifying the
positive impact of all of that.

Intel recently spun off some technology with the establishment of SpectraWatt. What can Intel’s silicon-level
expertise bring to the solar power market?

This is a little bit different. You are getting into an area that involves Intel Capital, basically the venture capital arm
of Intel. As you can imagine, there’s a lot of technology overlap in the production of silicon wafers for chips and
silicon wafers for solar power. There’s a big gap in terms of what they end up as, but as far as the way they are
manufactured, there are technologies that Intel Capital is making investments in, in the green tech space.
It is not a direct play to migrate Intel’s manufacturing technology to solar manufacturing. It’s a different
process, but there’s obviously some overlap. Those investments are much more about Intel Capital looking
at them as good investments – these are good things for the environment, and they are going to be good businesses
going forward. Intel Capital is looking at it much more from that perspective.

Intel is a major user of renewable energy. How has that affected operations in terms of investment in on-site
systems and managing different power suppliers?

What is happening with renewables is that we are buying renewable energy credits equivalent
to 1.3 million kWh, which lowers our footprint but also drives future investment in renewable
power. These credits are retired, so no one else can have them. If another company wants to use renewable
power, the [renewable energy] industry will have to invest in additional capacity in order to supply that company.

As a global company, is it difficult to implement a concerted renewable energy policy across your global
operations, given the different levels of availability in different geographical locations?

There are always challenges in trying to implement some of these policies when you take a global
approach – one environment, one energy policy – and globalise it. When we look at renewable energy
policy and the promotion of renewable energy projects, we bring in opportunities from every
geographical area: relationships in China and India, and all our sites in the US, Israel and Ireland.

We ask those sites to bring the best proposals they can to a management review
process, where we sit down and review all of the projects in terms of upfront costs, net costs, benefits to the
local community, benefits to employees and actual positive environmental impact, and base our decisions on
those.

The EU policy context


In June 2005, the EC published a strategic framework, i2010. It promotes an open and competitive digital
economy and emphasises ICT as a driver of inclusion and quality of life, as well as a tool for environmental
sustainability through clean, low-energy and efficient production processes.
In November 2006, the EC published an Energy Efficiency Action Plan intended to put the EU on course to
save 20% of its energy by 2020, to enhance the security of the energy supply and to reduce its environmental
impact.
In March 2007, the European Council reaffirmed the Community’s long-term commitment to sustainable
development, emphasised that the EU is committed to transforming Europe into a highly energy-efficient and
low greenhouse-gas-emitting economy, and endorsed a combined climate and energy policy package with the
following EU targets:
– reduction of GHG emissions in the order of 20% by 2020 compared to 1990
– 20% of energy from renewable sources by 2020, compared to the present 6.5%
– saving 20% of the EU’s energy consumption compared to projections for 2020

Smart buildings and smart consumers

– Buildings are the largest source of CO2 emissions in the EU: they account for about 40% of all energy
consumed in the EU, and 20 out of 37 European Technology Platforms have addressed energy efficiency in
buildings. The lighting sector is of particular relevance: 19% of global grid-based electricity is consumed by
lighting today, resulting in greenhouse gas emissions equalling 70% of the emissions of the world’s passenger
vehicles. Examples of the main ICT-based solutions expected in this sector are integrated multi-disciplinary
solutions for evaluating and designing buildings, and integrated building systems and external services for
optimal energy management.
– Consumers control or influence 60% of CO2 emissions. Changes on the scale needed, and at an affordable
cost, will only happen if consumers, business and government work together.
