- Work Packages
The goal of this work package is to support the operation of the SoBigData++ e-infrastructure, integrating new and existing services and enhancing them both quantitatively and qualitatively. Activities comprise (i) updating the best practices and policies for the harmonisation of federated resources available at local infrastructure sites; (ii) adapting existing and new resources to the identified best practices; and (iii) maintaining the SoBigData++ e-infrastructure and creating and updating the Virtual Research Environments (VREs) that support the activities performed in the other WPs.
T9.1 E-infrastructure core services and components
Task leader: CNR
This task integrates and enhances the core services and components required to maintain the SoBigData++ e-infrastructure. This infrastructure will be built by exploiting the web-accessible virtual machines operated and provisioned by D4Science.org, together with services for their management and administration. On those machines, data, tools, and services will be deployed and made available to the research communities of the project for access and use, via an authentication and authorization mechanism (also provisioned by this task) compliant with the EOSC identity federation. The initial set of resources will be hosted at the CNR-ISTI premises, which will offer its state-of-the-art facilities. This infrastructure is then federated with external infrastructures providing communities' tools, data, and services. This task also includes enhancing the set of core services and tools developed in SoBigData to fulfil the new requirements introduced by the federation of additional infrastructures. In particular: (i) the VRE Manager service will be enhanced to deal with resources provided under different policies; (ii) the Monitoring, Alerting, and Accounting services will be extended and rethought to include all federated resources while guaranteeing the required Quality of Service (QoS); (iii) the Catalogue service will be extended to deal with resources representing workflows and notebooks, such as those that will be delivered in T9.3.
The above core services and the tools required for their usage will allow the delivery of an infrastructure realising an effective and efficient system-of-systems with the following properties:
- Operational Independence: each federated system can be operated independently.
- Managerial Independence: each federated system maintains an operational existence independent of the system-of-systems.
- Evolutionary Development: development is evolutionary, with methods and resources added, removed, and modified with experience.
- Emergent Behaviour: the infrastructure performs functions that do not reside in any single component system.
Easy-to-use operational manuals facilitating exploitation of the platform in all its aspects will be made accessible through a specialised operations portal dedicated to developers, ICT managers, and service providers. Its design and functionalities will be shared, as jointly agreed with WP4, with the aim of keeping it coherent with the training materials produced by the project. In particular, researchers will be supported in all their activities, ranging from the publication of their research artifacts, such as datasets and methods, to the exploitation of those artifacts through the SoBigData Gateway and the secure use of the SoBigData APIs.
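As a concrete illustration of the secure API use mentioned above, the sketch below shows how a client could attach an AAI-issued token to a request against an infrastructure API. The base URL, catalogue path, and token are hypothetical placeholders, not the actual SoBigData API surface; only the OAuth2 bearer-header convention is standard.

```python
# Hedged sketch: calling an AAI-protected infrastructure API with a bearer
# token. Endpoint and parameters below are illustrative placeholders.

def build_auth_headers(access_token: str) -> dict:
    """Standard OAuth2 bearer header, as used by token-protected EOSC services."""
    return {"Authorization": f"Bearer {access_token}", "Accept": "application/json"}

def catalogue_search_request(base_url: str, query: str) -> tuple[str, dict]:
    # Hypothetical catalogue search endpoint, for illustration only.
    return (f"{base_url.rstrip('/')}/catalogue/search", {"q": query})

# Example (no network call is performed here):
url, params = catalogue_search_request("https://api.sobigdata.eu", "mobility")
headers = build_auth_headers("example-token")
# requests.get(url, params=params, headers=headers)  # would perform the call
```

The header-building step is deliberately separated from the request itself, so the same headers can be reused across the catalogue, storage, and method-execution endpoints a client might target.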
T9.2 Social Mining computational engine
Task leader: CNR
This task delivers and operates the SoBigData++ Social Mining computational engine. This engine is realised by exploiting and enhancing the gCube DataMiner engine operated by D4Science.org in order to federate and integrate existing software frameworks provided within the SoBigData community. This will lead to the creation of a platform where interdisciplinary tools, methods, and services can be contributed by WP8 and WP10 members, shared according to tailored policies, and easily combined. This new capability will enable truly open science by realising an integrated platform where executions can be repeated, compared, discussed, and logged. All the integrated methods will have access to common cloud storage. This task will also deliver support and tutoring to WP8 members to facilitate the integration of services provided by the partners.
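gCube DataMiner exposes its integrated algorithms through an OGC WPS interface, so an execution can be requested with a plain HTTP call. The sketch below builds such a request URL; the host, algorithm identifier, token, and input names are illustrative placeholders, not actual SoBigData++ method identifiers.

```python
# Hedged sketch: building a WPS Execute request for a method integrated in
# the DataMiner engine. Host, algorithm id, and token are placeholders.
from urllib.parse import urlencode

def wps_execute_url(host: str, algorithm_id: str, token: str, inputs: dict) -> str:
    """Build a WPS 1.0.0 Execute request URL for a DataMiner algorithm (sketch)."""
    data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    query = urlencode({
        "request": "Execute",
        "service": "WPS",
        "Version": "1.0.0",
        "gcube-token": token,          # per-user D4Science authorization token
        "Identifier": algorithm_id,    # hypothetical social-mining method id
        "DataInputs": data_inputs,
    })
    return f"{host}/wps/WebProcessingService?{query}"

url = wps_execute_url("https://dataminer.example.org",
                      "org.example.CommunityDetection",
                      "my-token", {"graph": "input.csv"})
# requests.get(url) would submit the execution (not performed here).
```

Because every execution is an explicit, parameterised request, the same call can be repeated, logged, and compared across runs, which is the open-science property the task description emphasises.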
T9.3 Online coding and workflow design
Task leader: EGI
This task deploys and operates the SoBigData++ environment for interactive computations using JupyterHub and JupyterLab technologies. This online coding system enables users to create live documents (notebooks) with code, text, and visualizations that capture the whole research process: developing, documenting, and executing code, as well as communicating the results. This environment will be enhanced to integrate with the SoBigData++ e-infrastructure, allowing users to easily share notebooks and leverage the existing capabilities interactively. The task also deploys and operates a workflow design system based on the Galaxy scientific workflow system. Galaxy is focused on providing an open, web-based platform for performing accessible, reproducible, and transparent science. This tool will allow users to seamlessly combine SoBigData++ Social Mining processes as workflow building blocks. The Galaxy deployment will be extended as needed to enhance the sharing of workflows among SoBigData++ users. This task includes access to the EGI Federated Cloud infrastructure, offering flexible and bookable resources in support of the Jupyter and Galaxy platforms with guaranteed service levels, facilitated through allocated (sub)contracting. In addition, EGI.eu and D4Science.org will provide access to additional resources, freely allocated or opportunistic, with best-effort service levels, to complement the computing capabilities of the platforms. EGI.eu has secured a budget to use the services of an ICT provider, focusing on IaaS cloud services, and has already identified potential providers willing to support this activity.
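The notebooks shared through JupyterHub are plain JSON documents following the nbformat v4 schema, which is what makes them easy to store, catalogue, and exchange via the e-infrastructure. A minimal shareable notebook can be built programmatically as follows (file name and cell contents are illustrative):

```python
# Hedged sketch: a minimal nbformat-v4 notebook, the JSON document format
# that JupyterHub/JupyterLab users create and share.
import json

def minimal_notebook(code: str, note: str) -> dict:
    """Return a minimal nbformat-v4 notebook with one markdown and one code cell."""
    return {
        "nbformat": 4,
        "nbformat_minor": 5,
        "metadata": {},
        "cells": [
            {"cell_type": "markdown", "metadata": {}, "source": note},
            {"cell_type": "code", "metadata": {}, "source": code,
             "outputs": [], "execution_count": None},
        ],
    }

nb = minimal_notebook("print('hello SoBigData++')", "# Analysis notes")
serialized = json.dumps(nb, indent=1)  # ready to be saved as an .ipynb and shared
```

Since the format is self-describing JSON, a catalogue service can index a notebook's markdown cells and metadata without executing any code, which is relevant to the Catalogue extension foreseen in T9.1.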
T9.4 Online science monitoring dashboard
Task leader: OpenAIRE
This task deploys, configures, and provides the technical support required to operate the OpenAIRE Research Community Dashboard (RCD) for the SoBigData research infrastructure. The RCD enables searching, monitoring, and statistical reporting of digital scientific products (publications, datasets, methods, experiments) related to the SoBigData++ Exploratories' topics or generated by scientists via the SoBigData++ VREs. Through the RCD, SoBigData++ 'animators' (and researchers) will be able to associate their scientific products with SoBigData++ projects and infrastructure, for the purposes of reporting to the Commission. The RCD will be configured by SoBigData++ administrators to set the monitoring criteria adopted to track the overall research impact of the SoBigData++ infrastructure services. Scientists generating methods or datasets via workflows in SoBigData++ will be offered the possibility to publish such results as digital objects on Zenodo.org, so as to ensure FAIR preservation, citation, and DOI minting for those objects. Scientists will be supported in this process by the SoBigData VREs, which will transparently ensure the objects are deposited on Zenodo.org with links between them (e.g. a workflow linked to the method objects used and to the input and output datasets) and to the SoBigData++ project.
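The deposition and linking step described above maps naturally onto Zenodo's deposit REST API, where links between objects are expressed as `related_identifiers` in the deposit metadata. The sketch below builds such a payload; the token, DOIs, and the "sobigdata" community identifier are illustrative placeholders, while the metadata field names (`title`, `upload_type`, `creators`, `related_identifiers`) are real Zenodo deposit fields.

```python
# Hedged sketch: metadata for a Zenodo deposit of a dataset generated by a
# VRE workflow, linked to the input dataset it was derived from.
def zenodo_deposit_metadata(title: str, creators: list[str],
                            derived_from_doi: str) -> dict:
    """Build the metadata payload for Zenodo's deposit API (sketch)."""
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": "Dataset generated via a SoBigData++ VRE workflow.",
            "creators": [{"name": n} for n in creators],
            # Link the deposited object to the dataset it was derived from:
            "related_identifiers": [
                {"identifier": derived_from_doi, "relation": "isDerivedFrom"},
            ],
            "communities": [{"identifier": "sobigdata"}],  # hypothetical community id
        }
    }

payload = zenodo_deposit_metadata("Mobility flows 2020", ["Doe, Jane"],
                                  "10.5281/zenodo.0000000")
# A deposit would then be created and published with authenticated calls, e.g.:
# requests.post("https://zenodo.org/api/deposit/depositions",
#               params={"access_token": TOKEN}, json=payload)
```

Once published, Zenodo mints a DOI for the deposit, and the `related_identifiers` entries make the workflow-to-dataset provenance links harvestable by OpenAIRE for the RCD's impact monitoring.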
T9.5 Supercomputing Network management and access
Task leader: BSC
Participants: CNR, LUH, AALTO, TU Delft, UNIPI, ETHZ, USFD, UT
This task manages the supercomputing network of the consortium in order to enable trans-national and virtual access to the facilities. The objective is to support users in finding the appropriate computational resources according to availability and their needs, and to facilitate the access procedures. This will be implemented via an online portal accessible through the SoBigData++ e-infrastructure. In particular, the existing network is composed of:
| Institution | Supercomputing facility | Description |
| --- | --- | --- |
| BSC | Internal resources | 6 dedicated nodes of the MareNostrum 3 cluster; consortium partners will also be able to request further resources at BSC through the PRACE calls. Details: https://www.bsc.es/ |
| AALTO | IT Center for Science (CSC) | (1) 1 cluster with 2,900 cores, 12 TB memory (1,024 GB per job), 420 TB disk space. (2) Grid with 500 CPU cores, 2-6 GB memory per job; the grid includes 7 GPU machines equipped with NVIDIA GTX 480. |
| TU Delft | HPC Centre Stuttgart (HLRS) | Details: https://www.hlrs.de/home/ |
| LUH | Internal resources | 10 nodes, 80 cores, 10 GB/core; storage: 720 TB (net 261 TB) |
| LUH | North-German Supercomputing Alliance (HLRN) | Details: https://www.hlrn.de |
| UNIPI | Internal resources | 1 cluster with 40 cores, 60 GB memory, and 24 TB disk |
| ETHZ | Internal resources | 1 cluster with 256 cores |
| USFD | Internal resources | Iceberg HPC cluster with 1,544 processor cores (~15 TFLOPS), 4,448 GB main memory, and 200 TB disk space; GPUs being added |
| UT | Internal resources | (1) Rocket HPC cluster with 2,700 cores and 400 TB of disk space. (2) 3 high-end servers, each with 20 cores and 512 GB RAM. (3) 2 high-end servers with 32 cores and 1 TB RAM |
| FHR | Internal resources | Living Lab Big Data with 40 nodes, 590 cores, 370 TB of disk space (43 TB SAS storage), 4.3 TB total memory, 2 nodes with 512 GB RAM, InfiniBand 1:1 full-blocking network |
| CNR | Internal resources | (1) 16 servers, 64 cores, 8-32 GB memory, 24 TB disk. (2) 1 server with 64 cores at 2.6 GHz, 125 GB memory, 15 TB disk. (3) 1 server with 40 cores at 2.4 GHz, 251 GB memory, 1 Titan Xp GPU. (4) 11 physical servers with 128 cores, 1.2 TB memory, 90+ TB HDD storage, 2 TB SSD storage, 10 Gbit NICs and Internet bandwidth. (5) Servers with 256 cores, 3 TB RAM, 20 TB HDD, and 2 GPUs (Titan Xp). |
Each institution's resources will be represented by an Institution Mediator. Once a user has found the right set of resources on the platform portal, the Institution Mediator will help establish the first connection and, where possible, facilitate access to the facility. In particular, BSC will offer its platform for accessing its computational resources as part of its participation in the project.