
Ingeniamos el futuro

CAMPUS OF
INTERNATIONAL
EXCELLENCE
Energy Efficiency beyond PUE:
Exploiting Application and Resource Knowledge
José M. Moya <jm.moya@upm.es>
Laboratorio de Sistemas Integrados
Dpto. de Ingeniería Electrónica
Universidad Politécnica de Madrid
José M. Moya | Madrid, 13 October 2013
Contents
- Motivation
- Current approach
- Our approach
- Resource planning and management
- Virtual machine optimization
- Low-power mode management
- Processor design
- Conclusions
Motivation
Energy consumption in data centers:
- 1.3% of the world's energy production in 2010
- USA: 80 million MWh/year in 2011 = 1.5 x NYC
- 1 average data center = 25,000 homes
- More than 43 million tonnes of CO2 per year (2% of the world total)
- More water than the paper, automobile, petroleum, wood or plastics industries
Jonathan Koomey. 2011. Growth in data center electricity use 2005 to 2010
Motivation
- The total electricity used by data centers in 2013 is expected to exceed 400 GWh/year.
- The energy consumption of cooling will continue to be as important as, or more important than, the consumption of computing.
- The energy optimization of future data centers will require a global, multi-disciplinary approach.
[Figure (Koomey 2011): world server installed base (thousands) and data center electricity use (billion kWh/year) in 2000, 2005 and 2010, broken down into volume, mid-range and high-end servers, plus storage, communications and infrastructure.]
- 5.75 million new servers per year
- 10% of servers sit unused (the CO2 of 6.5 million cars)
Temperature-dependent reliability problems
- Time-dependent dielectric breakdown (TDDB)
- Electromigration (EM)
- Stress migration (SM)
- Thermal cycling (TC)
Cooling a data center
Server improvements
- Virtualization: -27%
- Energy Star compliant servers: = 6,500 cars off the road
- Better capacity planning: 2,500 US homes
[Infographic: the energy impact of data centers]

Energy for the Internet. Every time you load a video on the Internet, share a photo, send an e-mail or check your bank account, your access device contacts a data center. Rows of servers storing billions of megabytes of information are concentrated in enormous data centers that consume large amounts of energy and power the web. Data centers are responsible for 2% of global carbon dioxide emissions and use 80 million megawatt-hours of electricity every year, almost 1.5 times the electricity used by the city of New York. At the current growth rate, and without improvements in energy efficiency, data centers will produce even more by 2020.

Impact of data centers:
- 5.75 million new servers are installed every year to keep up with the growth of on-line services, and still roughly 10% of installed servers are not used, because of conservative over-estimates when planning storage needs. The energy used by these idle servers could offset the emissions of 6.5 million cars, the equivalent of driving from the Earth to Mars and back about 700 times (at an average speed of 100 km/h, that trip would take 308,000 years).
- Data centers are big. Every year the new servers need more space; data centers are growing in size at a rate of 10% per year. The largest ones occupy more than one hundred thousand square metres, enough to hold 14 football pitches, and consume more than 100 times the energy of a building of similar size.
- A significant part of organizations' IT budgets (excluding software) goes to data centers.
- Computing equipment generates large amounts of heat. Cooling it accounts for 30% of the energy used in an average data center, which means that 281 million dollars go out the window.
- Servers are often over-provisioned to cope with demand peaks, which means that on average they run at only 20% of their capacity.
- Backup batteries in standby mode, microchips and many other components inside IT equipment run on direct current. Since today's data centers run on alternating current (like any office or home), energy has to be converted from one form to the other up to 5 times inside each data center. This means that 20% of the energy is wasted in the data center infrastructure.

Opportunities for efficiency:
- Better airflow management, using variable-speed cooling fans, and keeping data centers within slightly wider but equally safe temperature ranges, can cut energy consumption by 25%. For some data centers this is equivalent to the energy of 25,000 US homes. The investment needed to add energy-efficiency measures to data center cooling systems generally pays for itself in less than two years.
- Virtualization can cut energy consumption by 27%, reducing unused productive capacity.
- Using Energy Star compliant servers is the equivalent of taking 6,500 cars off the road, and would reduce data center electricity consumption by 82,000 megawatt-hours.
- Better capacity planning saves the equivalent of the energy consumed by 2,500 US homes.
- A direct-current design for data centers can eliminate unnecessary equipment and cut energy conversion losses by 20%. A conventional data center can also save up to 47 million dollars in real-estate costs by eliminating the space needed by the AC conversion equipment. If the world's data centers migrated to the new technologies available for DC power distribution, which reach efficiencies of up to 97%, the annual energy savings would be enough to charge an iPad for a very, very long time: 70 million years. DC infrastructures also allow better integration with renewable sources such as photovoltaic solar power, which generate energy in this form.
Cooling improvements
- Better airflow management and slightly wider temperature ranges
- Consumption reduction of up to 25%
- 25,000 homes, for a single data center
- Payback of the investment in only 2 years
Infrastructure improvements
- AC -> DC
- 20% reduction in conversion losses
- 47 million dollars of real-estate costs per data center
- Higher efficiency: energy savings enough to charge an iPad for 70 million years
Energy-efficiency best practices
[Figure: electricity consumption under different efficiency scenarios, from [EPA07]]
Improvement potential with best practices
To appear in Computer Networks, Special Issue on Virtualized Data Centers. All rights reserved.
[Fig. 2. Demonstration of heat recirculation: heated air in the hot aisle loops around the equipment to enter the air inlets. Cold air enters from the floor in the cold aisle between the racks.]
[Fig. 3. Data center operation cost (in kilowatts) for various savings modes: total power (computing and cooling) versus job size relative to data center capacity (%). The curves, from most to least consuming, are: max computing power with worst thermal placement; min computing power with worst thermal placement; optimal computing+cooling; optimal computing+cooling with idle servers shut off; and the same without recirculation. The gaps between curves show the savings from minimizing computing power, minimizing the recirculation effect, and turning off idle machines, above the basic (unavoidable) cost. Savings are based on heat recirculation data obtained by FloVENT simulation of the ASU Fulton HPCI data center.]
to cost savings, some research has included economical models to computing schedules [18]. Most of the schedulers implement the first-fit policy in job placement, mainly for reasons of low overhead.
In data centers, the majority of energy inefficiency is attributed to idle running servers and to heat recirculation [2] (Figure 2). Solutions such as low-voltage ICs [10], ensemble-level power management [11] and energy-efficient design of data centers [12] try to address these causes of inefficiency by reducing the inherent heat generation. Power control schemes, such as Freon-EC (an extension to the Freon power-aware management software which adds power control [4, 5]), or using energy-proportional systems [16], can help in addressing the energy inefficiency due to idle servers. To address heat recirculation, energy-efficient thermal-aware spatial scheduling algorithms have been proposed, such as MinHR and XInt [13]. Spatial scheduling can avoid or even prevent excessive heat conditions, while it can greatly reduce the cooling costs, which account for a large portion (about one third) of a data center's utility bill.
To demonstrate the magnitude of savings achieved by thermal-aware placement, and its complementary relation to power control schemes, we provide Figure 3, which was produced using XInt and numerical results from previous spatial scheduling work on thermal-aware placement [3]: the top two lines in the figure represent the most energy-consuming schedules, and they were produced using a variant of XInt that maximizes the thermal inefficiency of the data center; the third line is the XInt curve as obtained in the previous work [3]; the fourth line represents the combined use of XInt and turning off idle servers, and it was obtained by removing the power consumption of all unassigned servers from the XInt line; the bottom line represents the power consumption without heat recirculation. The figure shows that explicit power control and thermal-aware job placement are mutually complementary, with the former showing the most savings at low data center utilization rates and the latter at moderate to high (but not full) data center utilization rates. The figure also shows that power-aware yet heat-oblivious approaches that minimize only the computing power (second line from the top) do not save as much as thermal-aware approaches do.
The aforementioned thermal-aware job scheduling algorithms try to optimize the spatial scheduling (i.e. placement)
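The passage above describes thermal-aware spatial scheduling (MinHR, XInt) that places load so as to limit heat recirculation. As a rough, hedged illustration of the idea only, and not of the actual MinHR or XInt algorithms, the following sketch greedily fills the servers whose exhaust heat contributes least to recirculation; the server names, free-core counts and recirculation weights are invented for the example.

```python
def thermal_aware_placement(jobs, servers):
    """Greedy sketch: assign each job to the lowest-recirculation server that fits.

    jobs:    list of (job_name, cores_needed)
    servers: dict server_name -> [free_cores, recirculation_weight]
             (lower weight = less of its exhaust heat reaches the air inlets)
    """
    order = sorted(servers, key=lambda s: servers[s][1])  # least recirculating first
    placement = {}
    for job, need in jobs:
        for s in order:
            if servers[s][0] >= need:
                servers[s][0] -= need
                placement[job] = s
                break
        else:
            placement[job] = None  # no capacity left: the job waits in the queue
    return placement

# Hypothetical 3-server room: bottom-of-rack servers recirculate the least heat.
servers = {"rack1-top": [8, 0.9], "rack1-mid": [8, 0.4], "rack1-bot": [8, 0.2]}
jobs = [("job-a", 4), ("job-b", 8), ("job-c", 6)]
print(thermal_aware_placement(jobs, servers))
# -> {'job-a': 'rack1-bot', 'job-b': 'rack1-mid', 'job-c': 'rack1-top'}
```

A first-fit policy, by contrast, would pack jobs into whichever server appears first, regardless of how much of its heat loops back to the inlets.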
PUE
Power Usage Effectiveness
- State of the art: PUE = 1.2
- The important part is the computing consumption
- Energy-efficiency work in data centers is focused on reducing the PUE
- Reducing P_IT does not reduce the PUE, but it does show up in the electricity bill
- How can P_IT be reduced?
!"# !
!
!"#$
!
!
!"!#$
!
!"
!
!
!"#$%&!'(
!!
!"#!$%"!&'$()
!!
!"#$
!
!"#$%

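A minimal numeric sketch of the bullet above, with invented figures: if the cooling and power-delivery overhead stays fixed, cutting the IT power P_IT makes the PUE look worse even though the electricity bill drops.

```python
def pue(p_it_kw, p_overhead_kw):
    """PUE = total facility power / IT equipment power."""
    return (p_it_kw + p_overhead_kw) / p_it_kw

def monthly_bill_eur(p_it_kw, p_overhead_kw, eur_per_kwh=0.12):
    """Assumed flat tariff and 730 hours per month (illustrative only)."""
    return (p_it_kw + p_overhead_kw) * 730 * eur_per_kwh

overhead_kw = 200.0                      # cooling + power delivery, held constant
for p_it_kw in (1000.0, 800.0):          # before / after a 20% cut in IT power
    print(f"P_IT = {p_it_kw:6.0f} kW  "
          f"PUE = {pue(p_it_kw, overhead_kw):.2f}  "
          f"bill = {monthly_bill_eur(p_it_kw, overhead_kw):9,.0f} EUR/month")
```

With these numbers the PUE rises from 1.20 to 1.25 while the monthly bill falls from about 105,000 to 88,000 EUR, which is the point of the slide: PUE alone does not reward reducing the computing consumption.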
Energy savings by abstraction level
Our approach
- A global strategy that allows the use of multiple sources of information and coordinates decisions with the goal of reducing the total consumption
- Use of knowledge of the applications' energy demand characteristics and of the characteristics of the computing and cooling resources to apply proactive optimization techniques
Proactive optimization
[Diagram: Datacenter -> Model -> Optimization loop]
Proactive optimization
[Diagram: the datacenter and its workload are instrumented through a sensor network and a communication network. Sensors feed a sensor configuration and visualization layer, power, energy and thermal models, and anomaly detection and reputation systems. The models drive the optimizers (dynamic cooling optimization, resource allocation optimization, global DVFS, VM optimization), which act back on the datacenter through actuators.]
Sensors
Holistic approach

Abstraction level vs. scope. The numbers mark where each of the techniques presented in the following sections acts:

                 Chip   Server   Rack   Room   Multi-room
Sched & alloc            2               1
app
OS/middleware
Compiler/VM       3      3
architecture      4      4
technology        5

1: room-level resource management; 2: server-level resource management; 3: application- and resource-aware virtual machine; 4: global automatic frequency management; 5: temperature-aware core placement.
1. Room-level resource management
(Scheduling & allocation at the Room level in the holistic-approach table.)
Leveraging heterogeneity
- Using heterogeneity to minimize energy consumption from a static/dynamic point of view:
  - Static: find the best data center set-up, given a heterogeneous pool of machines
  - Dynamic: optimize task assignment in the Resource Manager
- We show that the best solution is found in a heterogeneous data center
- Many data centers are heterogeneous (several generations of machines)
M. Zapater, J.M. Moya, J.L. Ayala. Leveraging Heterogeneity for Energy Minimization in Data Centers, CCGrid 2012
Current scenario
[Diagram: WORKLOAD -> Scheduler -> Resource Manager -> Execution]
Cooling-aware resource planning and allocation
To appear in Computer Networks, Special Issue on Virtualized Data Centers. All rights reserved.

[Fig. 8. Energy comparison of the simulated schemes (FCFS-FF, FCFS-LRH, EDF-LRH, FCFS-XInt, SCINT) for the three scenarios, with idle servers left on and with idle servers shut off. Each panel plots the cooling and computing energy consumed (GJ) and reports throughput, turnaround time, algorithm runtime and energy savings. The plots correspond in respective positions to the plots of Figure 7. Reported energy savings (columns: FCFS-FF / FCFS-LRH / EDF-LRH / FCFS-XInt / SCINT):

Scenario (a), 40 jobs, 25014 core-hours, idle servers on:   0% / 6.2% / 8.6% / 8.7% / 10.2%
Scenario (a), 40 jobs, 25014 core-hours, idle servers off:  0% / 11.8% / 54.7% / 21.8% / 60.5%
Scenario (b), 120 jobs, 16039 core-hours, idle servers on:  0% / 1.7% / 4.1% / 3.6% / 4.7%
Scenario (b), 120 jobs, 16039 core-hours, idle servers off: 0% / 4.0% / 14.6% / 14.2% / 15.1%
Scenario (c), 174 jobs, 45817 core-hours, idle servers on:  0% / 2.5% / 5.9% / 9.4% / 12.5%
Scenario (c), 174 jobs, 45817 core-hours, idle servers off: 0% / 7.5% / 17.3% / 25.7% / 41.4%]

policy used in the data center, which enables job execution as soon as they arrive if the queue is empty and the data center is lightly loaded. In the idle-on case (Figure 8a), the total energy consumption using SCINT, EDF-LRH, FCFS-Backfill-FF, FCFS-Backfill-LRH and XInt placement is 131.3 GJ, 133.9 GJ, 146.2 GJ, 139.3 GJ and 139.3 GJ, respectively. However, in the idle-off case, the energy consumption reduces to 10.3 GJ, 11.8 GJ, 26.1 GJ, 24.4 GJ, and 24.6 GJ, respectively. Notice that the savings exceed 80% for any approach. The savings is achieved by: (i) the
iMPACT Lab (Arizona State U)
Application-aware resource planning and allocation (LSI-UPM)
[Diagram: WORKLOAD -> Resource Manager (SLURM) -> Execution, with a Profiling and Classification stage feeding an Energy Optimization stage]
Application-aware resource planning and allocation: scenario
- Workload:
  - 12 tasks from the SPEC CPU 2006 benchmark
  - Random workload of 2000 tasks, divided into job sets (see the sketch below)
  - Random arrival time between job sets
- Servers:
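To make the setup above concrete, here is a hedged sketch of a random workload generator of the kind described: 2000 tasks split into job sets with random inter-arrival times. The slide does not say which 12 SPEC CPU 2006 benchmarks were used, how large each job set is, or the arrival-time distribution, so the names and parameters below are placeholders.

```python
import random

# Stand-in task names: the 12 SPEC CPU2006 integer benchmarks (an assumption).
SPEC_TASKS = ["perlbench", "bzip2", "gcc", "mcf", "gobmk", "hmmer",
              "sjeng", "libquantum", "h264ref", "omnetpp", "astar", "xalancbmk"]

def make_workload(n_tasks=2000, jobset_size=50, max_gap_s=300.0, seed=1):
    """Random workload: n_tasks tasks split into job sets, with a random
    arrival gap (uniform, up to max_gap_s seconds) between job sets."""
    random.seed(seed)
    workload, t = [], 0.0
    for _ in range(0, n_tasks, jobset_size):
        t += random.uniform(0.0, max_gap_s)       # gap before this job set arrives
        jobset = [(random.choice(SPEC_TASKS), t) for _ in range(jobset_size)]
        workload.append(jobset)
    return workload

workload = make_workload()
print(len(workload), "job sets,", sum(len(js) for js in workload), "tasks")
```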
Application-aware resource planning and allocation: energy profiling
[Diagram: the Profiling and Classification stage performs energy profiling of the workload before it reaches the Resource Manager (SLURM) and the Energy Optimization stage]
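The energy-profiling stage above measures how much energy each application class draws on each server type. As a minimal, hedged sketch (not the actual LSI-UPM tooling), the following reads the Intel RAPL package-energy counter exposed by the Linux powercap driver around the execution of one task; the sysfs path and the benchmark launcher script are assumptions, and counter wrap-around is ignored.

```python
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package-0 counter (assumed path)

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

def profile(cmd):
    """Run one task and return (joules, seconds) drawn by the CPU package."""
    e0, t0 = read_energy_uj(), time.time()
    subprocess.run(cmd, check=True)
    e1, t1 = read_energy_uj(), time.time()
    return (e1 - e0) / 1e6, t1 - t0

joules, seconds = profile(["./run_benchmark.sh", "mcf"])   # hypothetical launcher
print(f"{joules:.1f} J in {seconds:.1f} s  ->  {joules / seconds:.1f} W average")
```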
Workload characterization
Work done: optimizations
- Energy minimization:
  - Minimization subject to constraints
  - MILP problem (solved with CPLEX)
  - Static and dynamic variants
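As a rough illustration of the kind of formulation listed above (energy minimization subject to assignment and capacity constraints, cast as a MILP), the sketch below uses the open-source PuLP modeller with its default CBC solver instead of CPLEX. The per-task energy figures, task names and core counts are invented; the real optimizer works on profiled energy models and many more constraints.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

# Hypothetical profiled energy (J) of each task class on each server class.
energy = {
    ("mcf",       "sparc"): 820.0, ("mcf",       "amd"): 910.0,
    ("perlbench", "sparc"): 640.0, ("perlbench", "amd"): 580.0,
    ("swim",      "sparc"): 300.0, ("swim",      "amd"): 420.0,
}
tasks   = sorted({t for t, _ in energy})
servers = sorted({s for _, s in energy})
cores   = {"sparc": 2, "amd": 2}          # invented capacity per server class

prob = LpProblem("energy_minimization", LpMinimize)
x = {(t, s): LpVariable(f"x_{t}_{s}", cat=LpBinary) for t in tasks for s in servers}

prob += lpSum(energy[t, s] * x[t, s] for t in tasks for s in servers)   # objective
for t in tasks:                                   # each task runs on one server class
    prob += lpSum(x[t, s] for s in servers) == 1
for s in servers:                                 # respect the cores of each class
    prob += lpSum(x[t, s] for t in tasks) <= cores[s]

prob.solve()
assignment = {t: s for (t, s), var in x.items() if value(var) > 0.5}
print(assignment, "total energy (J):", value(prob.objective))
```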
Application-aware resource planning and allocation: static optimization
- Definition of the optimal data center:
  - Given a pool of 100 machines of each type
  - 1 job set of the workload
  - The optimizer picks the best servers
  - Budget and space constraints
- Best solution: 40 Sparc, 27 AMD, ...
- Savings: 5% to 22% in energy, 30% in time
Application-aware resource planning and allocation: dynamic optimization
- Optimal assignment of the workload:
  - Uses the full workload (2000 tasks)
  - The algorithm finds a good assignment (not the best) in terms of energy
  - The algorithm runs at runtime
- Savings of 24% to 47% in energy
Application-aware resource planning and allocation: conclusions
- First proof of concept of the energy savings achievable through heterogeneity
- Automatic solution: the automatic selection of processors offers notable energy savings
- The solution can easily be implemented in a real environment:
  - Use of the SLURM Resource Manager
  - More realistic workloads and servers
2. Server-level resource management
(Scheduling & allocation at the Server level in the holistic-approach table.)
Scheduling and resource allocation policies in MPSoCs
A. Coskun, T. Rosing, K. Whisnant and K. Gross, "Static and dynamic temperature-aware scheduling for multiprocessor SoCs", IEEE Trans. Very Large Scale Integr. Syst., vol. 16, no. 9, pp. 1127-1140, 2008
1136 IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 16, NO. 9, SEPTEMBER 2008
[TABLE VI: Summary of experimental results]
[Fig. 3. Distribution of thermal hot spots, with DPM (ILP).]
[Fig. 4. Distribution of spatial gradients, with DPM (ILP).]
A. Static Scheduling Techniques
We next provide an extensive comparison of the ILP-based techniques. We refer to our static approach as Min-Th&Sp. As discussed in Section III, we implemented the ILP for minimizing thermal hot spots (Min-Th), energy balancing (Bal-En), and energy minimization (Min-En) to compare against our approach. To the best of our knowledge, this is the first time in the literature static MPSoC scheduling techniques are compared extensively to evaluate their thermal behavior.
We first show average results over all the benchmarks. Fig. 3 demonstrates the percentage of time spent at certain temperature intervals for the case with DPM. The figure shows that Min-Th&Sp achieves a higher reduction of hot spots in comparison to the other energy and temperature-based ILPs. The reason for this is that avoiding clustering of workload in neighbor cores reduces the heating on the die, resulting in lower temperatures.
Fig. 4 shows the distribution of spatial gradients for the average case with DPM. In this plot, we can observe how Min-Th increases the percentage of high differentials while reducing hot spots. While Min-Th reduces the high spatial differentials above 15 °C, we observe a substantial increase in the spatial gradients above 10 °C. In contrast, our method achieves lower and more balanced temperature distribution in the die.
In Fig. 5, we show how the magnitudes of thermal cycles vary with the scheduling method. We demonstrate the average percentage of time the cores experience temporal variations of certain magnitudes. As can be observed in Fig. 5, Min-Th&Sp reduces the thermal cycles of magnitude 20 °C and higher significantly. The temporal fluctuations above 15 °C are reduced in comparison to other static techniques, except for Min-En. The cycles above 15 °C (total) occur 17.3% and 19.2% of the time for Min-Th&Sp and Min-En, respectively. Our formulation targets reducing the frequency of the highest magnitude of hot spots and temperature variations; therefore, such slight increases with respect to Min-En are possible.
In the plots discussed before and also in Table VI, we observe that the Min-Th&Sp technique successfully reduces hot spots as well as the spatial and temporal fluctuations. Power
UCSD System Energy Efficiency Lab
Application-aware scheduling and resource allocation
- The energy characterization of applications makes it possible to define proactive scheduling and resource allocation policies that minimize hotspots
- Reducing hotspots makes it possible to raise the air temperature of the cooling systems
3. Application- and resource-aware virtual machine
(Compiler/VM at the Chip and Server levels in the holistic-approach table.)
JIT compilation in virtual machines
- The virtual machine JIT-compiles the application to native code for efficiency
- The optimizer is generic and oriented to performance optimization
JIT compilation for energy reduction
- Application-aware compiler:
  - Application characterization and transformations
  - Application-dependent optimizer
- Global view of the data center workload:
  - Energy optimizer
- Today, compilers for high-performance processors optimize only for performance
[Diagram: Front-end -> Optimizer -> Code generator -> Back-end]
Savings potential from the compiler (MPSoCs)
- T. Simunic, G. de Micheli, L. Benini, and M. Hans. "Source code optimization and profiling of energy consumption in embedded systems", International Symposium on System Synthesis, pages 193-199, Sept. 2000
  - 77% energy reduction in an MP3 decoder
- Fei, Y., Ravi, S., Raghunathan, A., and Jha, N. K. 2004. Energy-optimizing source code transformations for OS-driven embedded software. In Proceedings of the International Conference on VLSI Design, 261-266
  - Up to 37.9% (average 23.8%) energy savings in multiprocess programs on Linux
4. Global automatic frequency management
(Architecture at the Chip and Server levels in the holistic-approach table.)
DVFS: Dynamic Voltage and Frequency Scaling
- When the supply voltage is lowered, power decreases quadratically (at constant frequency)
- Delay increases only linearly
- The maximum frequency also decreases linearly
- Today, low-power modes are triggered by operating-system inactivity on a single server
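A hedged numeric sketch of the scaling relations above (first-order CMOS model, leakage and voltage floors ignored, all values invented): scaling voltage and frequency together cuts power roughly cubically while runtime grows only linearly, so energy per task falls roughly quadratically.

```python
def dynamic_power(c_eff, vdd, freq_hz):
    """First-order dynamic CMOS power: P = C_eff * Vdd^2 * f."""
    return c_eff * vdd ** 2 * freq_hz

C_EFF = 1.0e-9                      # effective switched capacitance (F), made up
POINTS = {"nominal": (1.0, 2.0e9),  # (Vdd in V, f in Hz)
          "scaled":  (0.8, 1.6e9)}  # Vdd and f scaled together by 0.8

p_nom = dynamic_power(C_EFF, *POINTS["nominal"])
for label, (vdd, f) in POINTS.items():
    power   = dynamic_power(C_EFF, vdd, f)
    runtime = POINTS["nominal"][1] / f          # runtime relative to nominal
    energy  = power * runtime                   # energy per task (relative)
    print(f"{label:8s} power={power / p_nom:.2f}x  runtime={runtime:.2f}x  "
          f"energy={energy / p_nom:.2f}x")
```

Under these assumptions the scaled point runs 1.25x longer but at 0.51x the power, so each task costs about 0.64x the energy.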
Room-level DVFS
- To minimize consumption, mode changes must be minimized
- Optimal algorithms exist for a known set of tasks (YDS)
- Knowledge of the workload makes it possible to plan low-power modes globally without loss of performance
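For reference, a compact sketch of the YDS critical-interval idea mentioned above: repeatedly find the time interval with the highest work density, run the jobs fully contained in it at that constant speed, remove them, and compress the timeline. This is a simplified illustration (EDF ordering inside each interval is omitted, and the intervals reported for later iterations are in the compressed timeline); job names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    arrival: float    # release time
    deadline: float
    work: float       # work units (e.g. cycles)

def yds(jobs):
    """Return [(interval, speed, job_names)] following the critical-interval rule."""
    jobs = [Job(j.name, j.arrival, j.deadline, j.work) for j in jobs]
    plan = []
    while jobs:
        times = sorted({j.arrival for j in jobs} | {j.deadline for j in jobs})
        best = None
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                inside = [j for j in jobs if t1 <= j.arrival and j.deadline <= t2]
                if inside:
                    density = sum(j.work for j in inside) / (t2 - t1)
                    if best is None or density > best[0]:
                        best = (density, t1, t2, inside)
        speed, t1, t2, inside = best
        plan.append(((t1, t2), speed, [j.name for j in inside]))
        done = {j.name for j in inside}
        jobs = [j for j in jobs if j.name not in done]
        gap = t2 - t1                      # compress the critical interval away
        for j in jobs:
            j.arrival  = j.arrival  - gap if j.arrival  >= t2 else min(j.arrival,  t1)
            j.deadline = j.deadline - gap if j.deadline >= t2 else min(j.deadline, t1)
    return plan

jobs = [Job("a", 0, 10, 4), Job("b", 2, 6, 6), Job("c", 5, 12, 3)]
for interval, speed, names in yds(jobs):
    print(interval, f"speed={speed:.2f}", names)
```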
Parallelism to save energy
[Figure: "Use of Parallelism" and "Use of Pipelining" slides, Swiss Federal Institute of Technology, Computer Engineering and Networks Laboratory]
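A back-of-the-envelope version of the point made by those slides, under idealized assumptions (dynamic power only, supply voltage allowed to scale linearly with frequency, no parallelization overhead): two units at half the frequency and half the supply voltage deliver the same throughput at roughly a quarter of the power.

```python
def dyn_power(c_eff, vdd, f):
    """First-order dynamic CMOS power: P = C_eff * Vdd^2 * f."""
    return c_eff * vdd ** 2 * f

C_EFF, VDD, F = 1.0e-9, 1.0, 2.0e9            # invented reference design point

p_sequential = dyn_power(C_EFF, VDD, F)                    # 1 unit at (Vdd, f)
p_parallel   = 2 * dyn_power(C_EFF, 0.5 * VDD, 0.5 * F)    # 2 units at (Vdd/2, f/2)

print(f"power ratio (parallel / sequential) at equal throughput: "
      f"{p_parallel / p_sequential:.2f}")    # ~0.25 under these assumptions
```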
Simulation results
5. Temperature-aware core placement
(Technology at the Chip level in the holistic-approach table.)
Temperature-aware floorplanning
Energy savings potential from floorplanning
- Maximum temperature reductions of up to 21 °C
- Average: -12 °C in maximum temperature
- Larger reductions in the most critical cases
[Figure: temperature reductions and maximum temperature, original vs. modified floorplan, for the SPEC benchmarks ammp, applu, apsi, art, bzip2, crafty, eon, equake, facerec, fma3d, gap, gcc, gzip, lucas, mcf, mesa, mgrid, parser, perlbmk, swim, twolf, vortex, vpr, wupwise and their average. Benchmarks with higher maximum temperature show larger temperature reductions.]
Y. Han, I. Koren, and C. A. Moritz. Temperature Aware Floorplanning. In Proc. of the Second Workshop on Temperature-Aware Computer Systems, June 2005
Temperature-aware floorplanning in 3D chips
- 3D integrated circuits are receiving attention:
  - Scaling: reduces the equivalent 2D area
  - Performance: shorter communication wire lengths
  - Reliability: less wiring
- Disadvantage: temperature peaks increase dramatically with respect to equivalent 2D designs
Temperature-aware floorplanning
- Reduction of up to 30 °C per layer in a 4-layer, 48-core 3D chip
And there is still more
- Smart Grids:
  - Consume when nobody else is consuming
  - Reduce consumption when everybody is consuming
- Reducing the electricity bill:
  - Time-of-day dependent cost
  - Reactive energy coefficient
  - Consumption peaks
Conclusions
- Reducing the PUE is not the same as reducing consumption
  - Computing consumption is dominant in modern data centers
- Knowledge of the application and of the resources can be used to establish proactive policies that reduce the total energy
  - At all levels
  - In all scopes
  - Considering computing and cooling simultaneously
- Proper management of the knowledge of the data center's thermal behaviour makes it possible to reduce reliability problems
- Reducing the total consumption is not the same as reducing the electricity bill
Contact
José M. Moya
Laboratorio de Sistemas Integrados
+34 607 082 892
jm.moya@upm.es
Thank you!