# Research support

# High Performance Computing

<div id="bkmrk-the-dcsr-team-is-abl">The DCSR team can help you with high performance computing topics, including:</div>- using the DCSR clusters when your research computations can no longer run on your local computer
- harnessing the DCSR clusters (CPU and GPU)
- scaling your codes to larger clusters such as CSCS

<div id="bkmrk-here-are-the-people-">Here are the people involved in HPC topics:</div>- All faculties  
    
    - Cristian Ruiz - HPC programming, code optimisation, scientific software stack
    - Emmanuel Jeanvoine - HPC programming (CPU and GPU), code optimisation, scientific software stack
- Mainly HEC and GSE faculties 
    - Flavio Calvo - HPC/scientific programming, code optimisation, numerical schemes/algorithmic
    - Margot Sirdey - HPC/scientific programming, code optimisation, numerical schemes/algorithmic

## Technical skills

<div id="bkmrk-here-are-some-topics">Here are some topics on which we can help.</div><div id="bkmrk-"></div>### Using the clusters efficiently

<div id="bkmrk-the-cluster-are-shar">The clusters are shared resources. To give all users fair access to the computing resources, the job scheduler is configured to avoid excesses. This does not, however, prevent inefficient use of the resources. Depending on your workload, we can help you minimise the execution time and the resources required, and therefore reduce the billing and get your results faster. This is usually achieved by tuning the job scripts and the threading parameters of your applications. We can also provide insight into the use of the different storage locations on the clusters, which can have a large impact on how long your jobs run.</div><div id="bkmrk--0"></div>### Scientific computing &amp; choice of optimised libraries
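
As a small illustration of threading tuning, a sketch assuming a Slurm scheduler (`SLURM_CPUS_PER_TASK` and the thread-count variables below are common conventions, not DCSR-specific settings): a Python job can align its libraries' thread counts with the cores it was actually allocated.

```python
import os

# SLURM_CPUS_PER_TASK is the usual Slurm variable holding the number of CPU
# cores allocated to the task; default to 1 when running outside a job.
allocated = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

# Threaded numerical libraries (OpenMP, OpenBLAS, MKL) read these variables
# at start-up; setting them before importing such libraries avoids
# oversubscribing the cores the job was actually allocated (and billed for).
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ.setdefault(var, str(allocated))

print("threads per library:", os.environ["OMP_NUM_THREADS"])
```

Running more threads than allocated cores does not speed jobs up; it usually slows them down, which is why matching the two is a typical first tuning step.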

<div id="bkmrk-if-you-have-to-devel">If you have to develop your own codes, several rules and good practices should be adopted. Furthermore, instead of reinventing the wheel and reprogramming everything from scratch, it is very likely that optimised and maintained libraries already exist for many of the problems you will face. We can help you choose the most commonly used optimised libraries and, where needed, benchmark several libraries covering the same topic according to your needs.</div><div id="bkmrk--1"></div>### Profiling &amp; code optimisation
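
As a minimal sketch of this kind of benchmarking, the standard-library `timeit` module can compare a hand-written implementation against an optimised one (here Python's built-in `sum`, standing in for an optimised library routine):

```python
import timeit

values = list(range(10_000))

def naive_sum(xs):
    # straightforward hand-written loop
    total = 0
    for x in xs:
        total += x
    return total

# Time both implementations over the same number of repetitions.
t_naive = timeit.timeit(lambda: naive_sum(values), number=200)
t_builtin = timeit.timeit(lambda: sum(values), number=200)

print(f"hand-written loop: {t_naive:.4f} s, built-in sum: {t_builtin:.4f} s")
```

The same pattern scales to comparing candidate libraries on a representative workload before committing to one.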

<div id="bkmrk-once-you-have-develo">Once you have developed a code, we can help you profile it to identify bottlenecks and improve the parts of the code where most of the time is spent. Optimisation can be achieved in several ways, including using a different algorithm, improving memory access, improving storage access, and using vectorisation.</div><div id="bkmrk--2"></div>### Parallelisation
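
For Python codes, a first profiling pass can be done with the standard-library `cProfile`; this sketch (the deliberately slow `inner` function is invented for the example) shows how to see where the time goes:

```python
import cProfile
import io
import pstats

def inner(n):
    # deliberately slow inner loop - the hotspot we expect the profile to show
    return sum(i * i for i in range(n))

def workload():
    return [inner(20_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort by cumulative time so the functions where most time is spent come first.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
print(report.getvalue())
```

Compiled codes would instead use tools such as `perf` or vendor profilers, but the workflow is the same: measure first, then optimise the hotspots the profile reveals.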

<div id="bkmrk-depending-on-your-co">Depending on your code, it may be possible to slightly modify the core computations so that they can be spread over several CPU cores or nodes. In some cases it can also be possible, and very worthwhile, to port parts of the code to GPU. Even if languages such as C/C++ or Fortran lend themselves more readily to parallelisation, significant gains can also be obtained with Python, R, or even Julia codes.</div><div id="bkmrk--3"></div>### Energy consumption

<div id="bkmrk-energy-consumption-i">Energy consumption is a major concern today. We are currently working on providing you with mechanisms that allow you to correlate your computations with their energy consumption. This can help you choose between several computing strategies, define trade-offs between precision and consumption, or even weigh the global benefit of your research against its environmental impact.</div><div id="bkmrk--4"></div><div id="bkmrk--5"></div>## Terms of support

<div id="bkmrk-we-distinguish-two-k">We distinguish two kinds of support:</div>- service mode: you submit a ticket to [helpdesk@unil.ch](mailto:helpdesk@unil.ch?subject=DCSR%20HPC%20support%20request) (don't forget to start the subject with DCSR) and we can help for a few hours. The service is free.
- project mode: you have a more complex project that requires several days/weeks/months of work. The service is billed (see U1 costs in the support column of the [cost model](https://unil.ch/ci/home/menuinst/calcul--soutien-recherche/couts-operationnels.html)).

<div id="bkmrk--6"></div>## Contact

Please send an email to [helpdesk@unil.ch](mailto:helpdesk@unil.ch?subject=DCSR%20HPC%20support%20request) and put "DCSR HPC support request" in the subject.

# BioImage Analysis

<div id="bkmrk-the-dcsr-team-is-abl">The DCSR team can help you with your image analysis pipeline. The person involved in this is **Antony Carrard**, *Image Analysis and Machine Learning specialist.*</div>## **Support Overview**

From a quick question or opinion to a long project conceived and structured together: everything related to extracting information and data from your images can be discussed together.

Here are some of the main topics that we are usually asked about:

<div id="bkmrk--0"></div>- Image pre-processing and enhancement
- Object detection and/or image *segmentation*
- Object *tracking*
- Quantifications (shape, dynamics, colocalization, and other properties)
- Clustering / classification of objects
- Visualization (rendering high-dimensional images, 3D images, etc.)
- Analytics (statistics of the extracted information).
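
In practice these tasks rely on libraries such as scikit-image; purely as an illustration of what segmentation and quantification mean, here is a dependency-free sketch (the toy image and helper are invented for the example) that thresholds a tiny grayscale image and measures the area of each detected object:

```python
# A toy 8-bit grayscale image: two bright blobs on a dark background.
image = [
    [10, 10, 200, 210, 10, 10],
    [10, 10, 220, 230, 10, 10],
    [10, 10, 10, 10, 10, 10],
    [180, 190, 10, 10, 10, 10],
    [200, 210, 10, 10, 10, 10],
]

def segment(img, threshold=128):
    """Threshold the image, then label 4-connected foreground components."""
    h, w = len(img), len(img[0])
    mask = [[img[y][x] > threshold for x in range(w)] for y in range(h)]
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                current += 1
                stack = [(y, x)]  # flood-fill one connected object
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and labels[cy][cx] == 0:
                        labels[cy][cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, current

labels, n_objects = segment(image)
print("objects found:", n_objects)

# Quantification: pixel area per labelled object.
areas = {}
for row in labels:
    for v in row:
        if v:
            areas[v] = areas.get(v, 0) + 1
print("areas:", areas)
```

Real pipelines add pre-processing (denoising, illumination correction) before thresholding and compute richer properties (shape, intensity, colocalization) after labelling.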

<div id="bkmrk--9"></div><div id="bkmrk-usually-most-of-the-">Most of the analyses we carry out are programmed in **Python**, as it is the most widespread programming language for the subject and is very fast and adaptable. (If you are already familiar with Python, please visit this very useful page: **https://bioimagebook.github.io/index.html**)</div><div id="bkmrk-but...">But...</div>### Software

<div id="bkmrk-energy-consumption-i">If you are not interested in coding, or do not feel ready to experiment with code, we can use some easy and intuitive software together.</div><div id="bkmrk-here-something-you-c">Here are some tools you can check out:</div><div id="bkmrk--10"></div><div id="bkmrk-imagej---https%3A%2F%2Fima">- **ImageJ** and Fiji - [https://imagej.net/imaging/segmentation](https://imagej.net/imaging/segmentation)
- **CellProfiler** - [https://cellprofiler.org](https://cellprofiler.org)
- **QuPath** - https://qupath.github.io
- **Ilastik** - [https://www.ilastik.org](https://www.ilastik.org)

</div><div id="bkmrk--11"></div><div id="bkmrk--12"></div>## **Contact &amp; Terms of support**

<div id="bkmrk-we-distinguish-two-k">**Main contact:** <research-computing-fbm@unil.ch></div><div id="bkmrk-"></div><div id="bkmrk-we-distinguish-two-k-1">**Important**: we distinguish two kinds of support:</div>- **service mode**: if you need quick help with a particular problem, a suggestion, or some information - submit a ticket to <research-computing-fbm@unil.ch> with the subject ***Service - \*name of your department\****.
- **project mode**: if you have a more complex project that requires several weeks/months of work and you want to collaborate with us - submit a ticket to <research-computing-fbm@unil.ch> with the subject ***Project - \*name of your department\****.

<div id="bkmrk-to-stay-tuned-on-pag">**Stay tuned**: for general info, scheduled events and meetings, or to contact us directly, join our **Teams channel**:</div>[General | Image Analysis and ML support at FBM Teams | Microsoft Teams](https://teams.microsoft.com/l/team/19%3Ain9Ysbu54cRiJw1j4T896jUWBkmwpWhRObwWId3L6ks1%40thread.tacv2/conversations?groupId=e4231fa4-dd25-41ab-8d6f-f056515a844d&tenantId=25933cd5-fa42-4290-9edd-84c5831bcdd8)

<div id="bkmrk--1"></div>**Other contact:** [helpdesk@unil.ch](mailto:helpdesk@unil.ch) with the subject *DCSR Image Analysis*

# Best Practices for Software Development

In progress

# Database support for humanities

The DCSR provides support to researchers in the humanities for projects based on **structured *corpora* (databases, digital libraries or collections, etc.)**.

The DCSR carries out a technological watch to provide researchers and research groups in the humanities with tools likely to cover some of the digital infrastructure needs encountered in the humanities. In the form of shared services, these tools enable researchers to organise, exploit and expose research databases online through configurable presentation interfaces.

The DCSR relies on tools used and supported by a strong community (individuals and institutions). These tools are made available to researchers as **shared services** so that the DCSR can provide lasting and sustainable maintenance for the databases and their presentation interfaces.

### Database support for humanities

The DCSR offers support for each stage of research within the framework of the tools made available:

- **planning**: help with the design and management of digital projects based on structured data (needs assessment, choice of tools and technologies, workflow, data management plan (DMP), etc.);
- **documentation**: help with the choice of relevant standards (ontologies, vocabularies or authoritative information, technologies) in an open science perspective (technical and semantic interoperability of data) to promote the reuse of data;
- **modelling**: to enable data to be used in accordance with researchers' expectations within a database, the DCSR provides guidance with data structuring (conceptual modelling);
- **collection**: assistance in setting up a workflow in accordance with good practices, with the aim of importing data in batches into a database (file naming policy, organisation of metadata, etc.);
- **storage**: the DCSR is in charge of storage and data backup procedures and handles the maintenance of the infrastructure within the framework of the shared tools provided;
- **analysis**: -
- **exposure**: depending on the needs of the project, and within the limits of the tools deployed, the DCSR can configure specific presentation interfaces for databases;
- **preservation**: the DCSR can coordinate the transfer of research data into a FAIR data repository in order to ensure their long-term availability when a database is no longer maintained;
- **reuse**: -

### Terms of service

TBD

### Contact

Researchers are encouraged to contact the DCSR well in advance (2 months) if the intended support is to help submit a project to a funding agency. Researchers should fill in this **[form](https://ls.dcsr.unil.ch/index.php?r=survey/index&sid=671546&lang=fr)** in order to provide the necessary details regarding the project.

Alternatively, for a one-time request, researchers can also contact the following people directly:

- [Marion Rivoal](mailto:marion.rivoal@unil.ch) -- coordination with researchers, needs assessment, data modelling, good practices in digital project management
- [Loïc Jaouen](mailto:loic.jaouen@unil.ch) -- infrastructure, technical assessment and technical support, good practices in digital project management

# Arches

Arches is an open-source platform for managing cultural heritage data. The DCSR provides UNIL researchers with local Arches instances.

# Overview

[Arches](https://www.archesproject.org/) is an *open source* platform developed by the [Getty Conservation Institute](https://www.getty.edu/conservation) and the [World Monuments Fund](http://www.wmf.org/) for managing data from the cultural heritage domain.

[![Arch.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/arch.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/arch.png)

In order to **promote the data and databases** of UNIL research teams, the [DCSR](https://www.unil.ch/ci/fr/home/menuinst/calcul--soutien-recherche.html) provides researchers with local Arches instances tailored to their needs. The DCSR supports the teams throughout the creation of the databases exposed by Arches and trains users in operating the platform.

Besides being able to be coupled with a [IIIF](https://iiif.io/) image server (a standard protocol for remote image access), Arches supports standard ontologies (e.g. [CIDOC-CRM](https://www.cidoc-crm.org/)) and controlled vocabularies (in [SKOS](http://www.w3.org/2004/02/skos/core.html)) without imposing them, leaving research teams free to decide how to structure their data. The platform also handles georeferenced data and can, where appropriate, be linked to a GIS.

# General user guide

## 1. Login

Click on "Sign in" and log in with your UNIL email address and the password provided by the DCSR.



## 2. The data model in Arches

The data model is organised around resource types (= classes) and properties. A resource type (called a "*Resource Model*") is associated with one or more properties (called "*nodes*"), and each property corresponds to a defined data format, called a *"datatype"*: text, date, link to another resource type, number, etc.

## 3. Overview of the graphical interface

### Data entry

#### Creating a resource

Once logged in, click on "*Manage*" at the top right. This opens the "*Resource Manager*" screen:

[![cap1.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap1.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap1.png)

Select the resource type (or class) you want to create and click on "*Create Resource*".

To create a resource, fill in all the properties ("*nodes*") relevant to that resource. These properties are listed in the left side panel.

[![cap2.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap2.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap2.png)

When you click on one of these properties, the elements to be filled in appear in the central panel.

[![cap3.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap3.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap3.png)

Mandatory elements are marked with an **asterisk**. Elements highlighted in green will be used to identify a resource, for example in search results; they serve as the title and description of the resource.

The central panel also displays indications about the format or nature of the data expected for a given property.

After entering a property, or the various elements that make up a property, you can confirm the entry ("*+ Add*") or cancel it ("*Cancel edit*").

[![cap5.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/D8Qcap5.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/D8Qcap5.png)

If a value has been entered for a property without being confirmed, the property is highlighted in **yellow**. You can then go back to that property and confirm the entry ("*+ Add*").

[![cap18.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/Aikcap18.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/Aikcap18.png)

The properties shown in **green** in the side panel are those used as the title and description of the resource. These properties make it possible to identify a resource among other search results (see the example below).

[![cap4.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap4.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap4.png)

#### Data formats

The properties of a resource correspond to different data formats.

##### • **plain text**

##### • **formatted text**

[![cap6.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap6.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap6.png)

##### • **upload of a document and/or an image**:

Drag and drop the document, or select the file to upload from a file browser. Follow the instructions provided by Arches.

[![cap9.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap9.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap9.png)

##### • **date**

Dates are supported with different levels of precision:

- exact date

[![cap7.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap7.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap7.png)

- date displayed with month or year precision

[![cap8.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap8.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap8.png)

In both cases, you can either use the calendar to pick the exact date or type the date (or part of it) directly into the field, which can save time.

- dates with heterogeneous imprecision ([Extended Date/Time Format](https://www.loc.gov/standards/datetime/edtf.html)):
    
    
    - chronological ranges when the exact date is not known: 1960/1971, 1971-06/1971-11, etc.
    - unknown parts of a date can be replaced by a `u`: 20uu-06-17, uuuu-12-11, 1978-uu-15.
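
As an illustration of this convention (a hypothetical helper, not part of Arches; note that more recent EDTF revisions use `X` rather than `u` for unspecified digits), such masked dates can be recognised with a simple pattern:

```python
import re

# Matches dates following the "u = unspecified digit" convention described
# above, e.g. 20uu-06-17, uuuu-12-11, 1978-uu-15 (year, optional month/day).
EDTF_U = re.compile(r"[\du]{4}(-[\du]{2}){0,2}")

def is_masked_edtf(value):
    """Hypothetical check: does value look like a u-masked EDTF date?"""
    return EDTF_U.fullmatch(value) is not None

for sample in ("20uu-06-17", "uuuu-12-11", "1978-uu-15", "1971-06/1971-11"):
    print(sample, is_masked_edtf(sample))
```

The last sample is an interval (the slash form above), which this particular pattern deliberately does not cover.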

##### • **controlled vocabularies**

Several types of **controlled vocabularies** (= value lists, thesauri) are supported:

- flat controlled vocabulary
- hierarchical controlled vocabulary
- thesaurus (SKOS, with term equivalence, multilingualism, etc.)

Depending on the needs of the project (flat or hierarchical vocabulary, and whether or not the vocabulary (flat only) needs to evolve during the project), either way of managing these vocabularies can be chosen. They can be displayed as checkboxes or as drop-down lists.

With drop-down lists, the user can either scroll through the whole list or type the first letters of the term being searched for.

[![cap10.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap10.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap10.png)

[![cap11.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap11.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap11.png)

##### • **URL**

The user can provide just the URL, or also a text to which the hyperlink will be attached:

[![cap12.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap12.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap12.png)

or:

[![cap13.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap13.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap13.png)

##### • **link to another resource**

Link to another resource type (class): depending on the data model defined for a project, a link can be created between two resource types (for example, between a "Book" and an "Author").

[![cap14.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/VCocap14.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/VCocap14.png)

If the resource you want to link to already exists, type a few letters of its title, or of one of its properties, into the corresponding field. Arches then displays the matching results.

[![cap15.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/mQ6cap15.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/mQ6cap15.png)

If the resource does not exist, you can create it via "*Create a new ...*". Once the various properties of this resource have been entered, click on "*Return*".

[![cap16.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap16.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap16.png)

[![cap17.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap17.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap17.png)

The link between the two resources is then created; all that remains is to confirm its creation by clicking on "*+ | Add*".

##### • geolocation

[![cap22.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap22.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap22.png)

An element can be located on a base map. The map extent is normally defined by default for the project, as is the base map (satellite image or map).

Depending on how this property is configured, you can place a point, a line or a polygon on the map via the "*Add new feature*" menu.

The user can change the base map from the side menu, "*Basemap*" tab, and select the one that suits them.

From the same menu, the "*Overlays*" tab lets the user display the elements already located for the various resource types: these elements can be shown or hidden, and their transparency adjusted when displayed (icon to the right of the property name).

[![cap23.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/BUTcap23.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/BUTcap23.png)

[![cap24.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/llWcap24.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/llWcap24.png)

#### Editing a resource

From the resource page, at the top right, select the "pencil" icon (= "*Edit Resource*").

[![edit1a.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/edit1a.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/edit1a.png)

Then, to edit each property, choose between the "pencil" icon (= edit) and the "trash" icon (= delete).

[![edit1b.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/edit1b.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/edit1b.png)

If no value has been entered for a property and you want to add one, select the property directly in the left side panel.

[![edit1b_copie.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/edit1b-copie.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/edit1b-copie.png)

After entering a new value in the property field, the user can choose between "*Delete this record*", "*x Cancel edit*" and "*+ Save edit*".

#### Deleting a resource

From the resource page, top left, menu "*Manage ···*" &gt; "*Delete Resource*".

[![cap19.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap19.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap19.png)

### Changing the permissions associated with a resource

Specific permissions can be set for **each resource**: "*Manage permissions*" at the bottom of the side panel, while editing the resource, provided the user has "*Superuser*" status.

[![cap39.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap39.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap39.png)

If the user wants to change the default permissions of a resource, note that clicking on "*Manage permissions*" immediately changes the current permissions and makes the user, by default, the only one able to access the resource in question: they alone hold the *"Read"*, "*Update*" and "*Delete*" rights on the resource, while all other individual users and group members are assigned "*No Access*".

To return to the default permission scheme for the resource, click on "*Allow Normal Access*". And if the user wants to modify the permissions associated with this resource, they must manually select, one by one, all the groups **and** all the individual users to whom they wish to grant the *"Read"*, *"Update"* and/or *"Delete"* rights, and tick/untick the corresponding rights.

### <span class="md-plain md-expand">Viewing</span>

<span class="md-plain">From a resource page, the “*Hide Null Values*” option at the top right hides properties that have not been filled in.</span>

### <span class="md-plain">Search</span>

<span class="md-plain">Arches offers four types of search from the left-hand *“Search”* menu, represented by a magnifying-glass icon:</span>

- <span class="md-plain">search based on the geolocation of resources (“*Map filter*” tab)</span>

[![cap29.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap29.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap29.png)

- <span class="md-plain">full-text search</span>

[![cap26.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap26.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap26.png)

- <span class="md-plain">search by resource type</span>

[![cap27.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap27.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap27.png)

- <span class="md-plain">advanced search (*“Advanced”* tab)</span>

[![cap28.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap28.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap28.png)

#### <span class="md-plain">Full-text search</span>

<span class="md-plain">Arches immediately shows the matches for the search term. It is also possible to distinguish between the term as it appears in the text fields of the database (“*Term Matches*”) and the term when it belongs to a controlled vocabulary (*“Concept”*).</span>

[![cap35.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap35.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap35.png)

<span class="md-plain">The character </span><span class="md-pair-s" spellcheck="false">`*`</span><span class="md-plain"> can replace one or more characters:</span>

[![cap36.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap36.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap36.png)

[![cap37.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap37.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap37.png)

<span class="md-plain">But placed at the beginning of a word, it returns no results (here, Rimbaud does not appear among the search results):</span>

[![cap38.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap38.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap38.png)

#### <span class="md-plain">Search by resource type </span><span class="md-pair-s"><span class="md-plain">(under construction)</span></span>

#### <span class="md-plain">Search by date </span><span class="md-pair-s"><span class="md-plain">(under construction)</span></span>

#### <span class="md-plain">Search by geolocation </span><span class="md-pair-s"><span class="md-plain">(under construction)</span></span>

#### <span class="md-plain">Advanced search </span>

<span class="md-plain">This search combines different criteria (or facets) with different operators (“and” and “or”).</span>

<span class="md-plain">The right-hand part under “</span><span class="md-pair-s ">*<span class="md-plain">Search Facets</span>*</span><span class="md-plain">” lists the resource types, and their properties, that can be used for the search. In the *“Find...”* field under “*Search Facets*”, the user can filter the resource types or properties displayed.</span>

<span class="md-plain">Once a resource type is selected, the user can then select the property or properties relevant to the search. To combine criteria, once the first property has been chosen, simply select another one; it will be added below the first. The user must then set the appropriate operator (*“And”* or *“Or”*).</span>

[![cap25.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap25.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap25.png)

<span class="md-plain">The search parameters offered for each property depend on the associated data type:</span>

- <span class="md-plain">for text:</span>

[![cap30.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap30.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap30.png)

- <span class="md-plain">for a number:</span>

[![cap31.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap31.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap31.png)

- <span class="md-plain">for a boolean (true/false, yes/no):</span>

[![cap32.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap32.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap32.png)

- <span class="md-plain">for a link between two resource types:</span>

[![cap33.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap33.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap33.png)

- <span class="md-plain">for a controlled vocabulary:</span>

[![cap34.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/scaled-1680-/cap34.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-01/cap34.png)

- <span class="md-plain">for geolocated data:</span>

<span class="md-plain">The “geolocation” data type is not supported in the advanced search; the “*Map filter*” tab filters search results according to the location of the resources or of their properties.</span>

# Importing images (Val d'Hérens/Smapshot)

***This section only concerns the Val d'Hérens 1950/2050 project.***

### Preamble

The Arches platform lets you upload an image [on your own](https://wiki.unil.ch/ci/books/research-support/page/mode-demploi-general-de-lutilisateur#bkmrk-%E2%80%A2-t%C3%A9l%C3%A9chargement-%28up).

However, image management must follow this protocol, for two reasons:
1. the storage space (200 GB) allocated to the virtual machine (VM) on which Arches is deployed calls for careful image management, given the parameters with which the documents were digitised;
2. since [Smapshot](https://smapshot.heig-vd.ch/) accesses the images served by Cantaloupe through a URL containing the original file name, users must make sure that any new version of an image uploaded manually to Arches (for example, to correct a wrong orientation) keeps the original image name.

# 1. Transforming the image before a first manual import

Manually uploading a new image, for data not yet transferred to Smapshot, requires:

- converting the image to the **`.png` format** (firstly because it is a lossless format, unlike `.jpeg`; secondly because web browsers can render it, unlike `.tif`, which means Arches can display the image);
- limiting the longest side of the image to **2500 pixels**.

Several tools can transform images in batches so that they meet these criteria: [ImageMagick](https://imagemagick.org/), Adobe Photoshop scripts, etc.

With ImageMagick, the command is:
```bash
convert -auto-orient -format png -resize 2500x2500\> original.tif destination.png
```
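Since the same criteria apply to whole batches of scans, the single-file command above can be wrapped in a small shell loop. This is a sketch: it assumes the `.tif` originals sit in the current directory and that ImageMagick's `convert` is on the PATH.

```bash
# Convert every .tif in the current directory to a resized .png,
# keeping the original file name stem (photo.tif -> photo.png).
for f in *.tif; do
    [ -e "$f" ] || continue   # skip cleanly if no .tif files match
    convert -auto-orient -format png -resize '2500x2500>' "$f" "${f%.tif}.png"
done
```

The `'2500x2500>'` geometry only shrinks images whose longest side exceeds 2500 pixels; smaller images are left untouched.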

# 2. Replacing one image with another (images already integrated into Smapshot)

For images already integrated into Smapshot that need cropping or whose orientation must be corrected, start from the images stored in `/local/sharedArches` on the storage space allocated to the Arches VM, because:
- all these images have already been converted and resized;
- starting from this directory preserves the original file names.

## 2.1. With CyberDuck, on a Mac

### 2.1.1. Installing CyberDuck

Install [CyberDuck](https://cyberduck.io/download/).

### 2.1.2. Setting up the connection

Once CyberDuck is open, click "Open Connection" (or "File" > "Open Connection").

[![cd1.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd1.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd1.png)

Set up the connection following this model (UNIL username and password):

[![cd3.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd3.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd3.png)

Click "Connect", then "Allow" in the dialog box that opens.

[![cd9.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd9.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd9.png)

To keep these connection settings, click "Bookmark" > "New Bookmark" in the main menu. In the window that opens, you can give the bookmark the nickname "vdherens1950".

[![cd8.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd8.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd8.png)

The next time you open CyberDuck, the connection to the **vdherens1950** server will be offered directly in the home window.

[![cd7.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd7.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd7.png)

### 2.1.3. Reaching the shared image directory `/local/sharedArches`

You arrive in a directory under your own name in the storage space:

[![cd4.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd4.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd4.png)

You now need to reach the `/local/sharedArches` directory where the images are stored. In the drop-down menu, select `/`:

[![cd13.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd13.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd13.png)

Then, among the directories displayed, choose "local" > "sharedArches":

[![cd14.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd14.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd14.png)

You are now in the shared image directory.

[![cd15.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd15.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd15.png)

### 2.1.4. Browsing images with CyberDuck's built-in viewer

Images can be viewed directly in the `/local/sharedArches` directory with **CyberDuck's built-in viewer**.

Right-click a file name and choose "Quick Look" in the context menu. CyberDuck needs a few seconds to load and display these previews.

You can also select several images (hold down the shift key, then right-click) and view the whole selection.

[![ cd10.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd10.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd10.png)

### 2.1.5. Editing images

Editing can be done with various applications, depending on the options available under "Edit with" in the context menu.

[![cd16.png](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/cd16.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/cd16.png)

We tested image editing with two applications:
- with **Preview**, changes cannot be saved directly on the server; Preview only offers to save a copy of the image locally.
- **Adobe Photoshop** saves changes directly on the server, with a simple `command + S`.

## 2.2. On Linux

### 2.2.1. Setting up the connection

On an [Ubuntu](https://ubuntu.com/) distribution, from the file browser, select "Other Locations":

[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675431951375.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675431951375.png)

then enter the server address `sftp://dcsrs-vherens50.dcsr.unil.ch/` and click "Connect":

[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675432177025.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675432177025.png)

For subsequent connections, the server address should remain in the list of recent connections.

Connect with your UNIL username without the email domain (here, `ljaouen`).

### 2.2.2. Reaching the shared image directory `/local/sharedArches`

By default, your home directory is opened; clicking the server name takes you back to the root, from which you can navigate to the `/local/sharedArches` directory:

[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675436308234.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675436308234.png)

### 2.2.3. Browsing images with the default viewer

Once in the `/local/sharedArches` directory, the list of images is visible; double-clicking an image displays it:
[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675436324830.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675436324830.png)

Use the [←] and [→] arrow keys to move from one image to the next.

### 2.2.4. Editing images

#### 2.2.4.1 Simple rotation

To rotate the image, hover the mouse over it; two rotation controls appear at the bottom centre:

[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675671271449.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675671271449.png)


Then simply save ([control]+[s]) or use the menu:
[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675675582545.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675675582545.png)

#### 2.2.4.2. Fine editing

For finer editing (cropping, rotation, cutting), the image can be opened with another editor (such as [gimp](https://www.gimp.org/), to be installed). Open the image with a right-click; save it with _File => Overwrite <name of the image>_

[![](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/scaled-1680-/image-1675676945394.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2023-02/image-1675676945394.png)

# Machine Learning

# Scientific Support

<div id="bkmrk-"></div>#### Need help with Machine Learning in your research?

Contact us at <helpdesk@unil.ch> with subject: DCSR ML support

Scientific support for Machine Learning projects, as outlined below, is provided free of charge to all UNIL members.

#### Introduction

Machine Learning provides a powerful framework for predictive modeling in scientific research:

- Infer outcomes from complex datasets using classification and regression models
- Evaluate and improve models based on predictive performance
- Use exploratory techniques to better understand and prepare your data

<div id="bkmrk--5"></div>At DCSR, we support researchers in several key areas of Machine Learning:

#### Training

We help you understand how specific Machine Learning methods work and how to apply them in your research. We also offer short introductory courses on Machine Learning; see [ML courses](https://wiki.unil.ch/ci/books/research-support/page/courses).

#### Methodology

We assist you in selecting and applying appropriate Machine Learning methods for your research.

This may include:

- A pilot phase, where we collaboratively develop and test code on your laptop or UNIL clusters
- A production phase, where we help scale and refine your workflow

More specifically, we can:

- Identify existing tools suited to your analysis
- Help install and run them on your laptop or UNIL clusters
- Explain key parameters and settings
- Help develop custom algorithms and code if no suitable tools exist

#### Infrastructure

We help you efficiently run your Machine Learning workflows on UNIL clusters.

This includes:

- Installing and configuring your code
- Profiling performance to optimize resource usage (RAM, CPUs/GPUs, number of nodes)

#### Collaboration at UNIL

We can connect you with relevant experts at UNIL to discuss specific Machine Learning challenges.

#### Example Use Cases

1. Experimental scientist  
    Wants to analyze data using Machine Learning on a laptop or UNIL clusters.  
    → We help identify suitable tools, explain how they work, and support their use.
2. Data scientist (setup phase)  
    Wants to implement a Machine Learning pipeline but is unsure how to proceed.  
    → We help select and apply appropriate methods.
3. Data scientist (review phase)  
    Has implemented a pipeline and wants feedback.  
    → We review the methodology and suggest improvements or alternatives.
4. Scaling from laptop to cluster  
    Wants to move a pipeline from a local computer to UNIL clusters.  
    → We assist with deployment, software setup, and performance optimization.

#### Contact

You can reach us at <helpdesk@unil.ch> with subject: DCSR ML support

# Courses

Here are the Machine Learning courses provided by the DCSR:

- [A Gentle Introduction to Decision Trees and Random Forests with Python and R](https://wiki.unil.ch/ci/books/cours-pour-le-personnel-et-les-doctorantes-unil/page/research-a-gentle-introduction-to-decision-trees-and-random-forests-with-python-and-r)
- [A Gentle Introduction to Deep Learning with Python and R](https://wiki.unil.ch/ci/books/cours-pour-le-personnel-et-les-doctorantes-unil/page/research-a-gentle-introduction-to-deep-learning-with-python-and-r)
- [An Introduction to Image Analysis with CNNs in Python](https://wiki.unil.ch/ci/books/cours-pour-le-personnel-et-les-doctorantes-unil/page/research-an-introduction-to-image-analysis-with-cnns-in-python)
- [An Introduction to Text Analysis with Transformers and LLMs in Python](https://wiki.unil.ch/ci/books/cours-pour-le-personnel-et-les-doctorantes-unil/page/research-an-introduction-to-text-analysis-with-transformers-and-llms-in-python)

These courses are free of charge for all UNIL members.

You can find the schedule and registration details here: [https://courses.unil.ch/ci](https://courses.unil.ch/ci)

For more information about these courses, please contact us at <helpdesk@unil.ch> with subject: DCSR ML courses

# DCSR-LLM - Toolkit for Research at UNIL

Large language models are attracting growing interest across research fields, but many academic uses require more than a simple chatbot interface. Researchers often need to compare models, test them on specific tasks, extract structured information from documents, or adapt them to a domain-specific workflow. For these needs, reproducibility, local control, and transparent experimentation matter as much as convenience.

[dcsr-llm](https://git.dcsr.unil.ch/Scientific-Computing/dcsr-llm) was developed with that reality in mind. It is a command-line toolkit designed to support research workflows with large language models in a more controlled and reproducible way. Rather than focusing only on conversational use, it brings together several core functions in a single framework: inspecting models before use, downloading and running them locally, generating predictions, benchmarking results, extracting structured data from text corpora, fine-tuning models, and exporting them for other environments.

[![output.png](https://wiki.unil.ch/ci/uploads/images/gallery/2026-03/scaled-1680-/output.png)](https://wiki.unil.ch/ci/uploads/images/gallery/2026-03/output.png)

For UNIL researchers, the value is practical. The tool is designed to work on local machines as well as on UNIL-supported GPU environments such as the Curnagl and Urblauna clusters. This makes it possible to move beyond isolated prompting and toward more systematic workflows. A team can, for example, inspect whether a model is compatible with its infrastructure, benchmark several models on the same question set, extract targeted variables from a document collection, or fine-tune an instruction model for a specialized task or terminology.

Several use cases are especially relevant in a research context. One is model selection: before downloading large files, researchers can inspect a model and estimate whether it is suitable for their hardware and intended workflow. Another is evaluation: instead of relying on impressions, researchers can benchmark baseline, quantized, or fine-tuned models on the same dataset and compare results consistently. A third is structured extraction: dcsr-llm can transform unstructured text into validated JSON outputs, with evidence tracking and review mechanisms that are useful for corpus-based work. For more advanced projects, the toolkit also supports fine-tuning existing instruct models to better match a domain, style, or task protocol.

A key strength of dcsr-llm is that it treats LLM use as a research workflow rather than a one-off interaction. Configurations, saved artifacts, and explicit processing steps help support reproducibility and make experiments easier to document, rerun, and compare. This is particularly important in academic settings, where results need to be traceable and methods need to remain understandable.

dcsr-llm is best understood as a technical research tool rather than a one-click application. It does not replace critical judgment, and model outputs still need to be checked and validated. But for researchers who want a more rigorous and flexible way to work with LLMs, it offers a strong foundation.

UNIL members who would like to learn more, try the tool, or provide feedback can visit the [dcsr-llm repository](https://git.dcsr.unil.ch/Scientific-Computing/dcsr-llm) or contact us at <helpdesk@unil.ch> with subject: DCSR-LLM.

Repository: [https://git.dcsr.unil.ch/Scientific-Computing/dcsr-llm](https://git.dcsr.unil.ch/Scientific-Computing/dcsr-llm)

# Deep Learning with GPUs

The training phase of your deep learning model may be very time consuming. To accelerate this process, you may want to use GPUs, and you will need to install deep learning packages, such as Keras or PyTorch, properly. Below is a short guide to installing some well-known deep learning packages in Python. If you encounter any problem during the installation, or if you need to install other deep learning packages (in Python, R or other programming languages), please send an email to <helpdesk@unil.ch> with subject DCSR: Deep Learning package installation, and we will try to help you.

### TensorFlow and Keras

We will install the TensorFlow 2's implementation of the Keras API (tf.keras); see [https://keras.io/about/](https://keras.io/about/)

To install the packages in your work directory:

```
cd /work/PATH_TO_YOUR_PROJECT
```

Log into a GPU node:

```
Sinteractive -m 4G -G 1
```

Check that the GPU is visible:

```
nvidia-smi
```

If everything works, you should see NVIDIA's status table. If instead you receive an error such as "nvidia-smi: command not found", something is wrong.

To use TensorFlow on NVIDIA GPUs, we recommend using NVIDIA containers that include TensorFlow and its dependencies, such as CUDA and cuDNN, which are necessary for GPU acceleration. The NVIDIA containers also include Python itself and various Python libraries, all chosen to be compatible with the version of TensorFlow you select. If you prefer to use the virtual environment method instead, please see the instructions in the comments below.

```
module load singularityce/4.1.0

export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"
```

We have already downloaded several versions of TensorFlow:

```
/dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.05-2.15.sif
/dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.01-2.14.sif
/dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-23.10-2.13.sif
/dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-23.07-2.12.sif
/dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-23.03-2.11.sif
/dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-22.12-2.10.sif
```

Here the last two numbers indicate the TensorFlow version, for example "tensorflow-ngc-24.05-2.15.sif" corresponds to TensorFlow version "2.15". In case you want to use another version, see the instructions in the comments below.

To run it:

```
singularity run --nv /dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.05-2.15.sif
```

You may receive a few error messages such as “not a valid test operator”, but this is OK and should not cause any problems. You should see a message from NVIDIA including the TensorFlow version. The prompt should now start with "Singularity&gt;", showing that you are working inside a Singularity container.

To check that TensorFlow was properly installed:

```
Singularity> python -c 'import tensorflow; print(tensorflow.__version__)'
```

There might be a few warning messages such as "Unable to register", but this is ok, and the output should be something like "2.15.0".

To confirm that TensorFlow is using the GPU:

```
Singularity> python -c 'import tensorflow as tf; gpus = tf.config.list_physical_devices("GPU"); print("Num GPUs Available: ", len(gpus)); print("GPUs: ", gpus)'
```

You can check the list of python libraries available:

```
Singularity> pip list
```

Notice that, on top of TensorFlow, several well-known libraries, such as "notebook", "numpy", "pandas", "scikit-learn" and "scipy", are installed in the container. NVIDIA has made sure that all these libraries are compatible with TensorFlow, so there should not be any version incompatibilities.

If necessary you may install extra packages that your deep learning code will use. For that you should create a virtual environment. Here we will call it "venv\_tensorflow\_gpu", but you may choose another name:

```
Singularity> python -m venv --system-site-packages venv_tensorflow_gpu
```

Activate the virtual environment:

```
Singularity> source venv_tensorflow_gpu/bin/activate
```

To install for example "tf\_keras\_vis":

```
(venv_tensorflow_gpu) Singularity> pip install tf_keras_vis
```

Deactivate your virtual environment and logout from singularity and the GPU node:

```
(venv_tensorflow_gpu) Singularity> deactivate
Singularity> exit
exit
```

#### Comments

##### Reproducibility

The container version pins all Python library versions, ensuring consistency across different environments. If you also use a virtual environment and want to make your installation more reproducible, you may proceed as follows:

1\. Create a file called "requirements.txt" and write the package names inside. You may also specify the package versions. For example:

```
tf_keras_vis==0.8.7
```

2\. Proceed as above, but instead of installing the packages individually, type

```
pip install -r requirements.txt
```
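If the packages are already installed in your virtual environment, you can also let pip generate this file, so that the recorded versions match exactly what is installed:

```
pip freeze > requirements.txt
```

You can then reinstall the same versions elsewhere with `pip install -r requirements.txt`.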

##### Build your own container

Go to the webpage: [https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/index.html](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/index.html)

Click on the latest release, "TensorFlow Release 24.05" at the time of writing, and scroll down to the table "NVIDIA TensorFlow Container Versions". It shows the container versions and the associated TensorFlow versions. For example, if you want to use TensorFlow 2.14, you could select container 24.01.

Go to the webpage: [https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags)

Select the appropriate container, for 24.01 it is "nvcr.io/nvidia/tensorflow:24.01-tf2-py3". Do not choose any "-igpu" containers because they do not work on the UNIL clusters.

Choose a name for the container, for example "tensorflow-ngc-24.01-tf2.14.sif", and create the following file by using your favorite editor:

```
cd /scratch/username/

vi tensorflow-ngc.def
```

```
Bootstrap: docker
From: nvcr.io/nvidia/tensorflow:24.01-tf2-py3

%post
    apt-get update && apt -y upgrade
    PYTHONVERSION=$(python3 --version|cut -f2 -d\ | cut -f-2 -d.)
    apt-get install -y bash wget gzip locales virtualenv git
    sed -i '/^#.* en_.*.UTF-8 /s/^#//' /etc/locale.gen
    sed -i '/^#.* fr_.*.UTF-8 /s/^#//' /etc/locale.gen
    locale-gen
```

Note that if you choose a different container version, you will need to replace "24.01" with the appropriate container version in the script.

You can now download the container:

```
module load singularityce/4.1.0

export SINGULARITY_DISABLE_CACHE=1

singularity build --fakeroot tensorflow-ngc-24.01-tf2.14.sif tensorflow-ngc.def

mv tensorflow-ngc-24.01-tf2.14.sif /work/PATH_TO_YOUR_PROJECT
```

That's it. You can then use the container as explained above.

Warning: do not build a Singularity container while logged into a GPU node; it will not work. You will, however, need to log into a GPU node to use the container, as shown below.

##### Use a virtual environment

Using containers is convenient because it is often difficult to install TensorFlow directly within a virtual environment. The reason is that TensorFlow has several dependencies and we must load or install the correct versions of them. Here are some instructions:

```
cd /work/PATH_TO_YOUR_PROJECT

Sinteractive -m 4G -G 1

module load python/3.10.13 tk/8.6.11 tcl/8.6.12

python -m venv venv_tensorflow_gpu
 
source venv_tensorflow_gpu/bin/activate
 
pip install tensorflow[and-cuda]==2.14.0 "numpy<2"
```

#### Run your deep learning code

To test your deep learning code (maximum 1h), say "my\_deep\_learning\_code.py", you may use the interactive mode:

```
cd /PATH_TO_YOUR_CODE/

Sinteractive -m 4G -G 1

module load singularityce/4.1.0

export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"

singularity run --nv /dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.05-2.15.sif

source /work/PATH_TO_YOUR_PROJECT/venv_tensorflow_gpu/bin/activate
```

Run your code:

```
python my_deep_learning_code.py
```

or copy/paste your code inside a python environment:

```
python

# then copy/paste your code, for example:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# etc.
```

Once you have finished testing your code, you must close your interactive session (by typing exit), and then run it on the cluster by using an sbatch script, say "my\_sbatch\_script.sh":

```
#!/bin/bash -l
#SBATCH --account your_account_id
#SBATCH --mail-type ALL
#SBATCH --mail-user firstname.surname@unil.ch

#SBATCH --chdir /scratch/username/
#SBATCH --job-name my_deep_learning_job
#SBATCH --output my_deep_learning_job.out

#SBATCH --partition gpu
#SBATCH --gres gpu:1
#SBATCH --gres-flags enforce-binding
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --mem 10G
#SBATCH --time 01:00:00

module load singularityce/4.1.0

export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"

# Keep only ONE of the two definitions below.

# To use the container's Python only:
# export singularity_python="singularity run --nv /dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.05-2.15.sif python"

# To use the container together with your virtual environment:
export singularity_python="singularity run --nv /dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.05-2.15.sif /work/PATH_TO_YOUR_PROJECT/venv_tensorflow_gpu/bin/python"

$singularity_python /PATH_TO_YOUR_CODE/my_deep_learning_code.py
```

To launch your job:

```
cd PATH_TO_YOUR_SBATCH_SCRIPT/

sbatch my_sbatch_script.sh
```

Remember that you should write the output files in your /scratch directory.

#### Multi-GPU parallelism

<div id="bkmrk-if-you-want-to-use-a">If you want to use a single GPU, you do not need to tell Keras to use it: when a GPU is available, Keras uses it automatically.</div>

On the other hand, if you want to use 2 (or more) GPUs on the same node, you need to use the TensorFlow function "tf.distribute.MirroredStrategy" in your python code "my\_deep\_learning\_code.py"; see the Keras documentation: [https://keras.io/guides/distributed\_training/](https://keras.io/guides/distributed_training/). If no devices are specified in the constructor argument of the strategy, it uses all available GPUs; if no GPUs are found, it falls back to the available CPUs.

This function implements single-machine multi-GPU data parallelism. It works in the following way: divide the batch data into multiple sub-batches, apply a model copy on each sub-batch, where every model copy is executed on a dedicated GPU, and finally concatenate the results (on CPU) into one big batch. For example, if your batch\_size is 64 and you use 2 GPUs, then we will divide the input data into 2 sub-batches of 32 samples, process each sub-batch on one GPU, then return the full batch of 64 processed samples. This induces quasi-linear speedup.
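The splitting described above can be sketched in plain Python. This is only a toy illustration of the mechanism (the doubling `process` function is a hypothetical stand-in for a model copy); the real work is done by TensorFlow on the GPUs:

```python
# Toy sketch of single-machine data parallelism:
# split a batch across devices, process each sub-batch, concatenate results.

def split_batch(batch, num_gpus):
    """Divide a batch into num_gpus contiguous sub-batches."""
    size = len(batch) // num_gpus
    return [batch[i * size:(i + 1) * size] for i in range(num_gpus)]

def process(sub_batch):
    # Hypothetical stand-in for one model copy running on one GPU.
    return [x * 2 for x in sub_batch]

batch = list(range(64))                # batch_size = 64
sub_batches = split_batch(batch, 2)    # two sub-batches of 32 samples each
results = [process(sb) for sb in sub_batches]  # sequential here, parallel on GPUs
full_batch = results[0] + results[1]   # concatenated back into 64 samples
print(len(full_batch))  # 64
```

Because each model copy works on only half the batch, two GPUs process a batch in roughly half the time, which is where the quasi-linear speedup comes from.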

And the sbatch script must contain the line:

```
#SBATCH --gres gpu:2
```

### TensorBoard

To use TensorBoard on Curnagl, you need to modify your code as explained in [https://keras.io/api/callbacks/tensorboard/](https://keras.io/api/callbacks/tensorboard/) .

After your TensorBoard "logs" directory has been created, you need to proceed as follows:

```
[/scratch/pjacquet] Sinteractive -m 4G -G 1
```

```
Sinteractive is running with the following options:

--gres=gpu:1 -c 1 --mem 4G -J interactive -p interactive -t 1:00:00 --x11

salloc: Granted job allocation 2466209
salloc: Waiting for resource configuration
salloc: Nodes dnagpu001 are ready for job
```

Note the GPU node's name (dnagpuXXX): here it is dnagpu001. You will need it later to open the ssh tunnel.
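If you script these steps, the node name can be pulled out of the salloc output automatically. A minimal sketch, assuming the output format shown above (the helper name is hypothetical):

```python
import re

def gpu_node_name(salloc_output):
    """Extract the node name from a 'salloc: Nodes ... are ready for job' line."""
    match = re.search(r"salloc: Nodes (\S+) are ready for job", salloc_output)
    return match.group(1) if match else None

print(gpu_node_name("salloc: Nodes dnagpu001 are ready for job"))  # dnagpu001
```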

Then

```
[/scratch/pjacquet] module load singularityce/4.1.0

[/scratch/pjacquet] export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"

[/scratch/pjacquet] singularity run --nv /dcsrsoft/singularity/containers/tensorflow/tensorflow-ngc-24.05-2.15.sif

Singularity> source /work/PATH_TO_YOUR_PROJECT/venv_tensorflow_gpu/bin/activate

(venv_tensorflow_gpu) Singularity> ls
logs

(venv_tensorflow_gpu) Singularity> tensorboard --logdir=./logs --port=6006
```

You will see the following message:

```
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.6.0 at http://localhost:6006/ (Press CTRL+C to quit)
```

On your laptop, you need to type:

```
ssh -J curnagl.dcsr.unil.ch -L 6006:localhost:6006 dnagpuXXX
```

where dnagpuXXX is the GPU node's name you used to launch TensorBoard (above it was dnagpu001).

Finally, on your laptop, you may use any web browser (e.g. Chrome) to open the page [http://localhost:6006](http://localhost:6006 "http://localhost:6006/") (copy/paste this link into your web browser). You should then see TensorBoard with the information located in the "logs" folder.
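Port 6006 may already be taken by another user on the same node; in that case, pass a different value to `--port` and to the ssh command. A small stdlib helper (hypothetical, not part of the cluster tooling) to find a free local port:

```python
import socket

def port_is_free(port, host="localhost"):
    """Return True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) != 0

# Pick the first free port at or above 6006.
port = 6006
while not port_is_free(port):
    port += 1
print(port)
```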

### PyTorch

To install the packages in your work directory:

```
cd /work/PATH_TO_YOUR_PROJECT
```

Log into a GPU node:

```
Sinteractive -m 4G -G 1
```

Check that the GPU is visible:

```
nvidia-smi
```

If it works properly you should see a table listing the NVIDIA driver version and the available GPU. If you instead get an error such as "nvidia-smi: command not found", you are most likely not on a GPU node; check that your Sinteractive command requested a GPU (-G 1).

To use PyTorch on NVIDIA GPUs we recommend the NVIDIA containers, which include PyTorch and its dependencies, such as CUDA and cuDNN, that are necessary for GPU acceleration. The containers also include Python itself and various Python libraries, all chosen to be compatible with the bundled PyTorch version. Nevertheless, if you prefer to use the virtual environment method, please look at the instructions in the comments below.

```
module load singularityce/4.1.0

export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"
```

We have already downloaded several versions of PyTorch:

```
/dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.05-2.4.sif
/dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.04-2.3.sif
/dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.01-2.2.sif
/dcsrsoft/singularity/containers/pytorch/pytorch-ngc-23.10-2.1.sif
/dcsrsoft/singularity/containers/pytorch/pytorch-ngc-23.05-2.0.sif
```

Here the last two numbers indicate the PyTorch version, for example "pytorch-ngc-24.05-2.4.sif" corresponds to PyTorch version "2.4". In case you want to use another version, see the instructions in the comments below.

To run it:

```
singularity run --nv /dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.05-2.4.sif
```

You may see a few error messages such as “not a valid test operator”; these are harmless. You should then see a message from NVIDIA including the PyTorch version, and the prompt should start with "Singularity&gt;", indicating that you are working inside a singularity container.

To check that PyTorch was properly installed:

```
Singularity> python -c 'import torch; print(torch.__version__)'
```

There might be a few warning messages such as "Unable to register", but this is ok, and the output should be something like "2.4.0".

To confirm that PyTorch is using the GPU:

```
Singularity> python -c 'import torch; cuda_available = torch.cuda.is_available(); num_gpus = torch.cuda.device_count(); gpus = [torch.cuda.get_device_name(i) for i in range(num_gpus)]; print("Num GPUs Available: ", num_gpus); print("GPUs: ", gpus)'
```

You can check the list of python libraries available:

```
Singularity> pip list
```

Notice that on top of PyTorch, several well-known libraries, such as "notebook", "numpy", "pandas", "scikit-learn" and "scipy", are installed in the container. NVIDIA ensures that all these libraries are compatible with the bundled PyTorch, so there should not be any version conflicts.

If necessary you may install extra packages that your deep learning code will use. For that you should create a virtual environment. Here we will call it "venv\_pytorch\_gpu", but you may choose another name:

```
Singularity> python -m venv --system-site-packages venv_pytorch_gpu
```

Activate the virtual environment:

```
Singularity> source venv_pytorch_gpu/bin/activate
```

To install for example "captum":

```
(venv_pytorch_gpu) Singularity> pip install captum
```

Deactivate your virtual environment and logout from singularity and the GPU node:

```
(venv_pytorch_gpu) Singularity> deactivate
Singularity> exit
exit
```

#### Comments

##### Reproducibility

The container version pins the version of every Python library it ships, ensuring consistency across environments. If you also use a virtual environment and want to make your installation more reproducible, you may proceed as follows:

1\. Create a file called "requirements.txt" and write the package names inside. You may also specify the package versions. For example:

```
captum==0.7.0
```

2\. Proceed as above, but instead of installing the packages individually, type

```
pip install -r requirements.txt
```
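Rather than writing requirements.txt by hand, you can also capture the exact versions installed in your active virtual environment with `pip freeze`:

```shell
# Record every installed package with its pinned version
pip freeze > requirements.txt

# Inspect the result
head requirements.txt
```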

##### Build your own container

Go to the webpage: [https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html)

Click on the latest release, which is "PyTorch Release 24.05" at the time of writing, and scroll down to the table "NVIDIA PyTorch Container Versions". It shows the container versions and the associated PyTorch versions. For example, if you want to use PyTorch 2.4, you could select the container 24.05.

Go to the webpage: [https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags)  
   
Select the appropriate container, for 24.05 it is "nvcr.io/nvidia/pytorch:24.05-py3". Do not choose any "-igpu" containers because they do not work on the UNIL clusters.

Choose a name for the container, for example "pytorch-ngc-24.05-2.4.sif", and create the following file by using your favorite editor:

```
cd /scratch/username/

vi pytorch-ngc.def
```

```
Bootstrap: docker
From: nvcr.io/nvidia/pytorch:24.05-py3

%post
    apt-get update && apt -y upgrade
    apt-get install -y bash wget gzip locales virtualenv git
    sed -i '/^#.* en_.*.UTF-8 /s/^#//' /etc/locale.gen
    sed -i '/^#.* fr_.*.UTF-8 /s/^#//' /etc/locale.gen
    locale-gen
```

Note that if you choose a different container version, you will need to replace "24.05" by the appropriate container version in the script.
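For example, switching the definition file to a hypothetical 24.06 release can be done with `sed` (the 24.06 tag here is only an illustration; check the release notes for real tags):

```shell
# Example definition header (as created above)
printf 'Bootstrap: docker\nFrom: nvcr.io/nvidia/pytorch:24.05-py3\n' > pytorch-ngc.def

# Substitute the container version in place
sed -i 's/24\.05/24.06/g' pytorch-ngc.def

grep 'From:' pytorch-ngc.def
# From: nvcr.io/nvidia/pytorch:24.06-py3
```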

You can now download the container:

```
module load singularityce/4.1.0

export SINGULARITY_DISABLE_CACHE=1

singularity build --fakeroot pytorch-ngc-24.05-2.4.sif pytorch-ngc.def

mv pytorch-ngc-24.05-2.4.sif /work/PATH_TO_YOUR_PROJECT
```

That's it. You can then use it as explained above.

Warning: do not build the singularity container while logged into a GPU node; the build will not work there. You will, of course, need to log into a GPU node to use the container, as shown below.

##### Use a virtual environment

We recommend containers because installing PyTorch directly in a virtual environment is often difficult: PyTorch has several dependencies, and you must load or install the correct version of each one. If you nevertheless prefer a virtual environment, here are the instructions:

```
cd /work/PATH_TO_YOUR_PROJECT

Sinteractive -m 4G -G 1

module load python/3.10.13 cuda/11.8.0 cudnn/8.7.0.84-11.8

python -m venv venv_pytorch_gpu
 
source venv_pytorch_gpu/bin/activate

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

#### Run your deep learning code

To test your deep learning code (maximum 1h), say "my\_deep\_learning\_code.py", you may use the interactive mode:

```
cd /PATH_TO_YOUR_CODE/

Sinteractive -m 4G -G 1

module load singularityce/4.1.0

export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"

singularity run --nv /dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.05-2.4.sif

source /work/PATH_TO_YOUR_PROJECT/venv_pytorch_gpu/bin/activate
```

Run your code:

```
python my_deep_learning_code.py
```

or copy/paste your code inside a Python interpreter:

```
python
```

Once you have finished testing your code, you must close your interactive session (by typing exit), and then run it on the cluster by using an sbatch script, say "my\_sbatch\_script.sh":

```
#!/bin/bash -l
#SBATCH --account your_account_id
#SBATCH --mail-type ALL
#SBATCH --mail-user firstname.surname@unil.ch

#SBATCH --chdir /scratch/username/
#SBATCH --job-name my_deep_learning_job
#SBATCH --output my_deep_learning_job.out

#SBATCH --partition gpu
#SBATCH --gres gpu:1
#SBATCH --gres-flags enforce-binding
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --mem 10G
#SBATCH --time 01:00:00

module load singularityce/4.1.0

export SINGULARITY_BINDPATH="/scratch,/dcsrsoft,/users,/work,/reference"

# Keep only ONE of the two definitions below.

# To use the container's Python only:
# export singularity_python="singularity run --nv /dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.05-2.4.sif python"

# To use the container together with your virtual environment:
export singularity_python="singularity run --nv /dcsrsoft/singularity/containers/pytorch/pytorch-ngc-24.05-2.4.sif /work/PATH_TO_YOUR_PROJECT/venv_pytorch_gpu/bin/python"

$singularity_python /PATH_TO_YOUR_CODE/my_deep_learning_code.py
```

To launch your job:

```
cd $HOME/PATH_TO_YOUR_SBATCH_SCRIPT/

sbatch my_sbatch_script.sh
```

### TensorBoard

You may use TensorBoard with PyTorch by following the documentation at [https://pytorch.org/tutorials/recipes/recipes/tensorboard\_with\_pytorch.html](https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html) and by slightly adapting the instructions above (see TensorBoard in the TensorFlow and Keras section).