---
tags: DeiC
---
```yml
Track: Spor 2 - HPC og eScience
Topic: HPC success stories
Title: Paneldebat: HPC i humaniora/samfundsvidenskab
Contact:
```
# DeiC 21 Panel #
**The SSH require specific IT skills.** It is often assumed that the social sciences and humanities (SSH) can borrow or repurpose the computational techniques developed elsewhere. That is true to some extent. But the SSH also have specific needs that are not served by the goals or perspectives of other fields. The SSH sometimes need to invent their own computational methods, which doubles the difficulty: SSH scholars may feel lost in existing computational methods, and IT engineers or CS scholars may feel lost or disconnected within an SSH lab.
<!---
* how does this differ from say most areas in health science or geography, biology ... areas that are not defined by computational approaches?
--->
**Critical proximity.** Many SSH scholars are critical of computational methods, especially when they have been borrowed from CS. This criticism is, of course, legitimate and even necessary. But many SSH scholars underestimate how much more productive _critical proximity_ can be than critical distance. One can better understand the specificities and limitations of computational methods by engaging with them rather than refusing to. Close engagement with tools and techniques can also enable SSH researchers both to imagine how tools could be productively redesigned and, indeed, how their research interests could be operationalized in new ways.
**Big data misunderstanding.** What many SSH scholars mean by big data is data that were created for non-scientific goals and later repurposed for research (e.g. social media data, what Kitchin refers to as exhaust data), and data that require quantitative or quali-quantitative (mixed) methods. But such data are often small enough to fit on a consumer laptop and be processed by Jupyter scripts, for instance. As a consequence, the necessity of cloud strategies tends to be poorly understood by SSH scholars. The need for HPC is rarely understood (and perhaps sometimes barely there) in practice.
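To make the scale point concrete, here is a minimal sketch (with invented sample rows; a real export would be a CSV file, and even a few million rows still fit in laptop RAM) of processing repurposed "exhaust data" with nothing beyond the Python standard library:

```python
import csv
import io
from collections import Counter

# Hypothetical sample of a repurposed social-media export; the column
# names and contents are invented for illustration.
raw = io.StringIO(
    "user,text\n"
    "a,Loving the #DHd2021 keynote\n"
    "b,New paper on #topicmodels and #DHd2021\n"
    "c,Slides for my #topicmodels talk\n"
)

# Count hashtag frequencies, a typical quali-quantitative starting point.
hashtags = Counter()
for row in csv.DictReader(raw):
    hashtags.update(w.lower() for w in row["text"].split() if w.startswith("#"))

print(hashtags.most_common(2))
```

Nothing here requires a cluster; the question for SSH scholars is at what data volume or model size this stops being true.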
<!---
Big Data as such is a misunderstanding, the size of a data set is always relative to your resources. If what you have had is a
--->
**Understanding your own needs.** You can only desire something when you know that it exists. For SSH scholars who have never been involved in HPC projects, it is impossible to conceive of HPC infrastructure as an opportunity. A first step is to have data of sufficient volume to be relevant for computational methods (e.g. training an ML model). A second step, once you have big data, is to recognize what computational methods can offer you. A third step is to integrate these methods into the commonly accepted practices of your field.
**Epistemological trojan horses.** There can be a tendency in 'softer' branches of SSH to think that the use of computational techniques brings with it a self-evident import of typically quantitativist research styles: once you have large volumes of data and employ machines to help you analyse them, you also need to abide by a different epistemology. That, however, is debatable, and arguably a product of needing to outsource data analysis to experts from other fields. It is perfectly imaginable, and to some extent already being demonstrated, that anthropologists or historians would want to employ their computational tools for more qualitative, exploratory, or even hermeneutic purposes, but that requires them to take a much more active part in the design and implementation of computational tools.
**SSH scholars who try HPC die by a thousand cuts.** Designers wrongly believe that they can be engineers. Conversely, engineers wrongly believe that they can be designers. Cloud solutions for academia, for instance UCloud, are poorly designed. They are only functional for highly acculturated engineers who have learned to endure the pain points. For the typical SSH scholar, the fact that every button and feature breaks common design conventions in its own special way creates an accumulation of problems that becomes impossible to overcome.
***
### Appendix
*Note from Mathieu with examples of pain points in UCloud. This is not a complaint; it is meant to show where the problem is, as possible discussion material.*
* *The button to delete a job requires you to keep the button pushed for a few seconds, which breaks the usage convention.*
* *Deleting a file fails if a file with the same name is already in the trash, unlike how it works on Mac and Windows.*
* *Uploading a folder only uploads its content, not the folder itself.*
* *Downloading a file changes its name (spaces become underscores).*
* *Uploaded files do not appear unless the web page is refreshed.*
* *Multiple file selection is possible, but it works differently from Windows, macOS, iOS and Android at the same time.*
* *Jupyter notebooks seem to keep a virtual UI with cell results, but this does not work if the browser is closed, which defeats the point of a cloud visual interface.*
<!---
* most of these issues have to do with the FS (Ceph), which UCloud is in the process of changing...
--->
*The point is: each of these micro issues is just a paper cut, yet SSH scholars will often drop trying to use the tool because the pain becomes too great before a usable result is obtained.*
***
### Iza's points
1. **People**
- naive users vs self-taught users vs expert users
Products can have very different audiences, and it is hard to predict them. It is usually unknown whether the technique needs to be packaged into a point-and-click interface that enables naive users to take advantage of it, or whether the customers may be fully trained computer scientists employed on SSH projects. The balance between friendliness and giving users more freedom is difficult to strike.
- interdisciplinarity
In ten years of working at the interface between CompSci and SSH, I have never witnessed a case where putting specialists from both disciplines in one room, in the hope they will figure it out between themselves, worked. Not in health sciences, ecology, or the humanities. The distance is simply too great. What does work is CompSci specialists focusing on SSH working together with SSH specialists focusing on CompSci. Alternatively, the project needs a hybrid specialist who is both.
- hybrid scientists' handicap
Career paths for computational specialists in SSH and SSH specialists in computational fields are poorly developed, and those who follow these specialisations can become severely handicapped because of their interdisciplinarity. Their research outcomes are often dismissed (in both disciplines) and they fall between departments. This disincentivises potential adopters on both sides.
- skills development: good initial support, then nothing for intermediate and advanced users
While there are many opportunities for introductory training in data science, simulation, and (perhaps less so) HPC, there is no equivalent training at intermediate and advanced levels. In short, most users are taught enough coding to make a graph in Python, but beyond that they are on their own. Support for hybrid scientists is limited.
2. **Access**
- institutional access
Who pays for what, and what are the access routes? How informal is the access route? Does one need to know someone who knows someone? Is support given as a personal favour?
- technical skill access
Are customers aware of the products and their functions? What is the time/effort investment versus the perceived benefit?
- financial access
Cost of HPC access. Indirect cost of HPC use, i.e., resources needed to develop skills, applications, and methods. Given the distance between users' existing expertise and the computational expertise they need to develop (whether themselves or through hires), the cost is high while the potential for significant outputs is moderate, making it a risky activity.
3. **Technology**
- existing technology
Some existing technology does not translate into HPC, which limits its use.
- specific applications
Research interests may differ from those already catered for in HPC. Bespoke solutions are too risky to develop.
4. **Solutions**
- investment in leaders
Technologies that have broken into particular fields have done so because of individuals who successfully translated them into their disciplines. They break the risk barrier (see 2. financial access) and should be invested in.
- better understanding of the needs and existing capacity
Development "over the heads" of users is unlikely to work.
- incentives to use HPC
There are currently almost none. For example, in the UK all departments had to chip in towards the cost of building and maintaining HPC capacity. SSH then had an incentive to use it, since they wanted to get their money's worth. Career opportunities are high-level incentives. Again an example from the UK: 40% of Humanities PhDs at my institution were funded through the EPSRC and NERC (i.e., engineering/comp sci and natural sciences) research councils.
- bespoke support channels
SSH are likely to trip over different issues than other users. They need more and different support.
---
General comment: In my opinion, we are doing a pretty good job in supporting new users in DK. There are naturally many issues - we are early in the process - but they are of a general nature.
__traditional and 'new' users of HPC__
* not an SSH special case: new users are everywhere
* novel users of HPC and interactive computing
* traditional HPC users:
* write code base for project
* formulate batch script
* submit to job queue
* new HPC users
* easy & interactive access
* collaborative development
* code and data sharing
* requires data-centric platform
* sandbox/VRE for data exploration
* ml/ai applications
* interactive access for experimentation and debugging
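To make the contrast above concrete, the 'traditional' workflow (code base, batch script, job queue) typically looks like the following minimal sketch, assuming a SLURM scheduler; `train_topic_model.py` and its options are hypothetical stand-ins for a researcher's own analysis code:

```shell
#!/bin/bash
#SBATCH --job-name=topic-model      # name shown in the job queue
#SBATCH --ntasks=1                  # a single task
#SBATCH --cpus-per-task=8           # cores available to the analysis
#SBATCH --mem=32G                   # memory for the whole job
#SBATCH --time=02:00:00             # wall-clock limit before the job is killed
#SBATCH --output=topic-model-%j.log # stdout/stderr, %j = job id

# Hypothetical payload: the researcher's own script, run non-interactively
# once the scheduler grants the resources.
python train_topic_model.py --input corpus/ --topics 50
```

Submitted with `sbatch`, the job runs unattended whenever resources free up; the 'new' user profile instead expects to open a notebook and see results immediately, which is exactly the gap the list above describes.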
__Level of Ambition__
What percentage of SSH researchers should use (DeiC) HPC (ambition/objective)? Our ambition defines the requirements for services etc. In traditional HPC, the '_you must learn to crawl before you can walk_' doctrine sets a somewhat high barrier to entry, while users of the new profile '_just want to fly with AI_' without caring about the intermediate steps. If we want to include all of them (ambition ~30-40% of SSH), then we need to invest in a different kind of support (facilities, consulting, training). To what extent does this task/obligation belong on the provider's side of the equation?
Related to this, we have to consider relevant propagation mechanisms, because for new users it can be hard to conceive of HPC as a relevant option (although it may very well be). Do we want to integrate HPC more broadly in education, and again to what extent (should Innovation Management, for instance, know how to deploy a notebook on a GPU node)? Another issue that should not be ignored is user diversity: new users increase user diversity, and to what extent do we want to actively facilitate this?
__Resource utilization__
This is an often overlooked problem when comparing traditional (batch job) to interactive use. Because many interactive users utilize compute resources as a workstation, the average utilization over a day, week, or month results in a biased estimate. Instead we need to observe average and peak utilization during the workday. If this issue is not considered, interactive systems will look hugely inefficient, and we are effectively biasing evaluation against new users.
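A minimal sketch of the bias (with synthetic utilization samples; the 09:00-17:00 workday window is an assumption): a node used interactively looks idle on a 24-hour average but heavily loaded during working hours:

```python
from datetime import datetime, timedelta

def utilization_stats(samples, workday=(9, 17)):
    """Compare the 24-hour average with the workday average and peak.

    samples: list of (timestamp, utilization as a fraction in [0, 1]).
    workday: half-open hour range defining interactive working hours.
    """
    overall = sum(u for _, u in samples) / len(samples)
    day = [u for t, u in samples if workday[0] <= t.hour < workday[1]]
    return {
        "overall_avg": overall,
        "workday_avg": sum(day) / len(day),
        "workday_peak": max(day),
    }

# Synthetic example: a node that idles at night but is busy 09:00-17:00.
start = datetime(2021, 11, 1)
samples = [
    (start + timedelta(hours=h), 0.8 if 9 <= h < 17 else 0.05)
    for h in range(24)
]
stats = utilization_stats(samples)
print(stats)  # the ~0.30 daily average hides the 0.80 workday load
```

Evaluating the node on the overall average would flag it as wasted capacity; the workday figures show it is doing exactly the job an interactive user needs.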
__front and back office distinction is antiquated__
The distinction between front and back office, i.e., contact with the user vs contact with software/hardware, is antiquated, not least in the case of new users. Many 'advanced' new users from SSH (e.g., linguistics, archaeology, business intelligence) are experts in their software and data. They will experience frustration and delays working with the front office and would be better served by working with the 'back office' directly. We want to shorten the systems development life cycle and provide continuous delivery with high software quality.
__interactive and interactive__
* interactive computing vs interactive development
* interactive development requires a more inclusive concept of support
* cf. DeiC's original five-level model of support (from back to front office)
* level of support: _Exploration vs. Exploitation_
* support role: classical (IT support), collaborator (RSE), enablers/teachers, community of practice
* centralized or local
* economy
__data access and quality__
SSH need an ELIXIR (/AAAI platform) and B2FIND (discovery portal/metadata indexing service) for cultural heritage data. Furthermore, we need standardized ways of assessing data quality a priori and conducting bias analysis across data providers (e.g., national libraries). This is probably more of a cultural and technical problem.