Computer and Storage List
Computer List
name | office | info updated | user | cores | processor | RAM | OS | Video Ports | Displays | Software | Purchased |
---|---|---|---|---|---|---|---|---|---|---|---|
carpathia | 379 | | Tests | 6 | Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz | 32GB | Ubuntu 20.04.5 | | | | 2014 |
liminal | 379 | | Alex | 6 | Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz | 128GB | Ubuntu 20.04.5 | | | QChem: for FCIDUMPs, export QC=qclocal; . ~ajwt3/code/qchem/qcsetup.bash. NB (22/12/22): non-canonical RHF integral dumps may be incorrect (use a UHF calc and read it in to RHF). | 2017 |
hypatia | G.05 | | NCP [Doug, Tom, Anna] | 6 | Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz | 32GB | Ubuntu 20.04.6 | | | | 2014 |
serenity | 376 | | Andreea | 6 | Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz | 64GB | Ubuntu 22.04 | | | | 2015 |
sandstone | 378 | 01/04/2025 | Kripa | 6 | Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz | 32GB | Ubuntu 22.04.5 | GTX 750 Ti | | QChem: source /home/hynl2/code/qcsetup.bash | 2015 |
gritstone | UG11 | | Lijun [Theo, Brian] | 6 | Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz | 32GB | Ubuntu 20.04 | | | | 2015 |
moonraker | UG03a | | Charlie [Moritz, Max, Nick, Benjamin] | 4 | Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz | 64GB | Ubuntu 20.04.5 | | | QChem: export QC_EXT_LIBS=/home/hynl2/code/extlib; source /home/hynl2/.qcsetup | 2016 |
obsidian | 378 | | Bence [Eline, Lila, Isha, Zian] | 6 | Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz | 64GB | Ubuntu 20.04.5 | NVIDIA GeForce GTX 750 Ti (Compute Capability 5.0) | | | 2016 |
hylas | 378 | | Rowan [Juan, Fabio] | 6 | Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz | 64GB | Ubuntu 20.04 | | | | 2016 |
cerberus | UG11 | | Alex, Bence | 6 | Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz | 32GB | CentOS 7 [FPGA development board host] | | | | |
chucksty | 110 | | Jack [Theo, King, David] | 6 | Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz | 128GB | Ubuntu 20.04.5 | | | QChem: source /home/maf63/qchem-public/qcsetup | 2017 |
chesterian | 360 | | Reka [Daniel, Bang, Tarik] | 6 | Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz | 128GB | Ubuntu 20.04.5 | | | QChem: . /home/cbh31/code/qcsetup.public/qcselectversion.sh | 2017 |
behemoth | 378 | | [Yi, Brian, Arta] | 8 | Intel(R) Xeon(R) Silver 4208 CPU @ 2.10GHz | 256GB | Ubuntu 20.04.5 | | | /scratch2 has 18TB of scratch. QChem: source /home/maf63/qchem-public/qcsetup. MRCC: source /home/ajwt3/code/mrcc | 2020 |
nemesis | 378 | | Constance | 6 | Intel(R) Core(TM) i7-4930X CPU @ 3.40GHz | 16GB | Ubuntu 20.04.5 | | | | |
chiron | UG03a | | Chiara | 10 | Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz | 96GB | Ubuntu 20.04 | | | | 2021 |
topaz | 360 | | Lila | 8 | Intel(R) Core(TM) i9-11900 CPU @ 2.50GHz | 128GB | Ubuntu 20.04 | NVIDIA GeForce RTX 3080 | | | 2022 |
cerebro | Alavi & Thom Groups | | | 12 x 20 and 16 x 3 [currently] (cores x nodes) | 2x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz; 2x Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz | 24GB; 64GB | Rocks 6.2 (CentOS 6.9), SLURM queuing | | | QChem: source /home/hynl2/code/qchemsetup.bash. MRCC: source /home/ajwt3/code/mrcc_2023 | |
CSD3 | University Tier-2 | | | 56 x 672 (Cascade Lake); 76 x 544 (Ice Lake) | 2x Intel(R) Xeon(R) Platinum 8276 CPU @ 2.20GHz; 2x Intel(R) Xeon(R) Platinum 8368Q CPU @ 2.60GHz | 192 or 384GB; 256 or 512GB | Rocky Linux 8, SLURM queuing (36h max) | | | Free core hours are available - talk to AJWT. QChem: source /rds/project/ajwt3/rds-ajwt3-thom1/qchem_public/qcsetup.bash | |
nest | CUC3 Group cluster | | | 40 x 20 | 2x Cascade Lake Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz | 192GB | CentOS Linux release 7.9.2009 (Core), SLURM queuing | | | QChem: source /home/maf63/code/qcsetup.sh | |
rogue | CUC3 Group cluster | | | (8 NVIDIA V100 + 32 CPU) x 2 | 2x Skylake Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz | 192GB | CentOS Linux release 7.9.2009 (Core), SLURM queuing | | | | |
archer-2 | National Tier-1 Supercomputer | | | 128 x 5848 | 2x AMD EPYC Zen2 (Rome) 64-core CPUs @ 2.2GHz | 256GB and 512GB | | | | | |
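Several of the shared machines above (cerebro, CSD3, nest, rogue) use SLURM queuing, so work is submitted as batch jobs rather than run interactively. Below is a minimal sketch of a batch script for nest: the job name, core count, time limit, and input file are placeholders, and it assumes the usual `qchem -nt <threads> <in> <out>` invocation is available after sourcing the setup script from the table.

```bash
#!/bin/bash
#SBATCH --job-name=qchem-test    # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8        # placeholder core count
#SBATCH --time=12:00:00          # stay under any queue limit (e.g. 36h max on CSD3)

# Q-Chem environment on nest (path from the table above)
source /home/maf63/code/qcsetup.sh

# Run Q-Chem threaded over the allocated cores; water.in is a placeholder input
qchem -nt $SLURM_CPUS_PER_TASK water.in water.out
```

Submit with sbatch <script> and check the queue with squeue -u $USER.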
Machine status can be monitored at https://hobbit.ch.cam.ac.uk/xymon/workstations/workstationsThom/workstationsThomLinux/
Notes
To find out your OS version, run
lsb_release -a
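For example, on one of the Ubuntu 20.04 workstations the output looks roughly like this (the exact point release will vary):

```
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.5 LTS
Release:        20.04
Codename:       focal
```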
To determine the RAM, run
head -1 /proc/meminfo
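The MemTotal figure is reported in kB; to convert it to an approximate GB value directly, a one-liner like this works:

awk '/MemTotal/ {printf "%.1f GB\n", $2/1024/1024}' /proc/meminfo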
To find out core counts, run
cat /proc/cpuinfo
NB the number of 'processors' may be different from the number of cores owing to hyperthreading. The 'cpu cores' value is the one to take for single CPU machines.
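On machines with lscpu installed (standard on these distributions), the sockets/cores/threads breakdown is easier to read than raw /proc/cpuinfo:

lscpu | grep -E '^(Socket|Core|Thread)'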
Hobbit may also have some useful information.
Group computer reps can manage group entries in the department database and there's a hardware inventory and a space report too.
Storage
A common cause of running out of storage on your workstation is Anaconda, which puts its environments in /home. The .conda directory can safely be moved to /scratch and replaced with a symbolic link:
cd $HOME
mv .conda /scratch/$USER
ln -s /scratch/$USER/.conda
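To confirm the move worked, check that the link points at the new location (conda follows the symlink transparently):

ls -ld $HOME/.conda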
To find out how much storage you have available and what files/directories are taking up space, the following commands are useful. The first one shows how much space is used/available on each partition, and the second shows the size of everything in the current directory.
df -h
du -sh * | sort -hr
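To drill down into a large area such as your scratch space, limiting the listing to the top level and sorting by size, something like this works:

du -h --max-depth=1 /scratch/$USER | sort -hr | head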
If you can't find any files in /scratch/$USER/thom-fs-common, you might need to authenticate with a password. If you normally log in with key authentication, you can force a password prompt with
ssh -oPubkeyAuthentication=no localhost
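Once you have re-authenticated, the mount should become readable again; a quick check (path from the table below) is

ls /scratch/$USER/thom-fs-common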
Name | Type | Amount | Notes |
---|---|---|---|
/home/$USER | local disk | ~50GB per person (changed to ~100GB after upgrade to 20.04) | Backed up with snapshots - Theory RIG policy |
/scratch/$USER | local disk | ~1TB+ depending on computer | NOT BACKED UP |
/scratch/$USER/thom-fs-nethome and /scratch/$USER/thom-fs-common | Chemistry network drive | 2.3TB | Backed up with snapshots - Theory RIG policy |
/scratch/$USER/ifs-thom | Former UIS mount, now located at /scratch/$USER/thom-fs/old-ifs-thom | 6144GB | Read-only |
/scratch/$USER/theory-fs | Chemistry network drive | ~50GB per person | Backed up with snapshots - Theory RIG policy |
cerebro:/filestore | Local RAID array | 36950GB | Backed up with snapshots - Theory RIG policy |
Theory RIG backup policy
From https://www.ch.cam.ac.uk/computing/managed-linux-workstations-faq, snapshots are kept roughly as follows:
- a few backups taken over the last 24 hours
- then, about one backup per day for the previous week
- then, about one backup per week for the previous month
- then, about one backup per month for the previous few months