A. Grossir’s scientific contributions


Publications (3)


Experience of public procurement of Open Compute servers
  • Article
  • Full-text available

December 2015 · 56 Reads · 3 Citations

Journal of Physics Conference Series

Olof Bärring · Marco Guerri · Eric Bonfillou · [...] · Anthony Grossir

The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructure at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open-source software projects. In 2013 CERN acquired a few OCP servers in order to compare their performance and power consumption with standard hardware. The conclusion was that the savings are sufficient to motivate an attempt to procure a large-scale installation. A further objective was to evaluate whether the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes that procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).


Figure 1: Integration of ITCM with PRMS and HMS  
Migration of the CERN IT Data Centre Support System to ServiceNow

June 2014 · 407 Reads · 8 Citations

Journal of Physics Conference Series

The large potential and flexibility of the ServiceNow infrastructure, based on "best practices" methods, has allowed the migration of some of the ticketing systems traditionally used for monitoring the servers and services available at the CERN IT Computer Centre. This migration enables the standardization and globalization of the ticketing and control systems, implementing a generic system extensible to other departments and users. One of the activities of the Service Management project, together with the Computing Facilities group, has been the migration of the Remedy-based ITCM structure to ServiceNow within the context of the ITIL process called Event Management. The experience gained during the first months of operation has been instrumental in the migration of other service monitoring systems and databases to ServiceNow. The use of this structure has also been extended to service tracking at the Wigner Centre in Budapest.


Figure 1: layout of the Wigner data centre machine rooms. The availability of the three blocks for CERN usage is indicated.  
Figure 2: example of the custom barcode required on each system unit and enclosure. The first part before the dash ('-') is the CERN contract identifier and the second part is the vendor serial number. The same information must be burned into the FRU of the BMC of the system.  
Experience with procuring, deploying and maintaining hardware at remote co-location centre

June 2014 · 116 Reads · 1 Citation

Journal of Physics Conference Series

In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension of CERN's central computing facility beyond its current boundaries, which are set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The contract includes a 'remote-hands' service for physical handling of hardware (rack mounting, cabling, pushing power buttons, ...) and maintenance repairs (swapping disks, memory modules, ...). However, only CERN personnel have network and console access to the equipment for system administration. This report gives an insight into the adaptations of hardware architecture and the procurement and delivery procedures undertaken to enable remote physical handling of the hardware. We also describe tools and procedures developed for automating the registration, burn-in testing, acceptance and maintenance of the equipment, as well as an independent but important change to IT asset management (ITAM) developed in parallel as part of the CERN IT Agile Infrastructure project. Finally, we report on experience from the first large delivery of 400 servers and 80 SAS JBOD expansion units (24 drive bays) to Wigner in March 2013. Changes were made to the abstract file on 13/06/2014 to correct errors; the PDF file was unchanged.

Citations (3)


... Facebook also worked on a cloud hardware platform by introducing an open-source project called the Open Compute Project (OCP) [7]. The goal of this project is to allow cloud services to choose the most suitable hardware (server, storage, network) design for a cloud data center [8]. Microsoft initiated the Olympus hardware project, a "next generation hyperscale cloud hardware design and a new model for open source hardware development with the (OCP) community" [9]. ...


Dynamic K-Means Clustering of Workload and Cloud Resource Configuration for Cloud Elastic Model
Experience of public procurement of Open Compute servers

Journal of Physics Conference Series

... For example, ServiceNow pulls core data from the CERN Foundation database; computer configuration information from LanDB, INFOR, PuppetDB, the IT CMDB and LayoutDB; pushes service catalogue information to several databases; and synchronises tickets with other systems such as GGUS, INFOR, JMT, JIRA and PLAN. A particularly interesting integration is the one with the monitoring system of the CERN Data Centre [8,9]. ...

Migration of the CERN IT Data Centre Support System to ServiceNow

Journal of Physics Conference Series

... CERN IT department had been co-locating part of its compute and storage capacity at the Wigner Data Centre (WDC) in Budapest since 2013 [5]. By the end of 2018 we had about 1000 2U chassis with 4 servers each (2U4N) and 900 disk arrays (JBODs) deployed at WDC. ...

Experience with procuring, deploying and maintaining hardware at remote co-location centre

Journal of Physics Conference Series