17th November, 2021
Attending: Greg, Josh, Hailey, Ryan, Davis
- Josh: brain, logo, sharding, cog/tiledb
  - logo: RW pink gradient in full screen. WF: all nice
  - brain: nibabel community will have a call
  - sharding: work starting
  - tiledb: group metadata & hierarchy traversal (sketch below)
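  - sketch (not from the call): the group-metadata / hierarchy-traversal behavior a backend like the tiledb one has to mirror, shown with zarr-python's default in-memory store; group/array names are made up:

        import zarr

        # Group metadata plus h5py-style traversal.
        root = zarr.group()                 # in-memory store by default
        root.attrs["project"] = "demo"      # group metadata
        a = root.create_group("a")
        a.create_dataset("x", shape=(10,), dtype="i4")

        root.visit(print)                   # traversal: prints "a", then "a/x"
        print(dict(root.attrs))             # {'project': 'demo'}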
- WF: proposal writing process (group effort)
  - deadline in a couple of weeks.
- DB: something for the docs
  - wanted to test Python 3.9 - no idea
  - tox was breaking
  - JM: tox-conda? Maybe. (sketch below)
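  - sketch (not an actual project config): a minimal tox.ini for the tox-conda route, assuming pytest as the runner:

        # Hypothetical tox.ini; tox-conda resolves the env with conda,
        # which can help when a plain tox env breaks on a new Python.
        [tox]
        envlist = py39
        requires = tox-conda

        [testenv]
        conda_deps =
            numpy
        deps =
            pytest
        commands =
            pytest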
- WF: little cheatsheets / pamphlet size per project. Could use something like that for AGU (presenting NCZarr)
  - one-pager? QR code / gitter
  - WF: DOI? No. Only zenodo:
- JM: oh yeah, big-endian crap
  - see
  - WF: used qemu directly.
  - GL: something similar with cibuildwheel for aarch64 (GHA); sketch below
    - see
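  - sketch (versions and job names illustrative): the standard cibuildwheel + QEMU pattern for emulated aarch64 wheels on GitHub Actions:

        # Hypothetical GHA workflow: aarch64 wheels built under QEMU emulation.
        name: wheels
        on: [push]
        jobs:
          build_wheels:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v2
              - uses: docker/setup-qemu-action@v1   # registers QEMU binfmt handlers
                with:
                  platforms: arm64
              - uses: pypa/cibuildwheel@v2.2.2      # builds wheels in manylinux images
                env:
                  CIBW_ARCHS_LINUX: aarch64         # emulated arch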
- GL: opened PR for V3 classes
  - shouldn’t break anything in V2
  - shouldn’t release without the high-level methods
  - 2 follow-up PRs with the rest of the sections
- JM: perhaps we try to get 2.11 out before merging any of that
- GL: for John K.
  - question from the RAPIDS team.
  - they have C++ code to read SVS for a client
  - general questions
  - reading on the CPU (similar to openslide; multithreaded)
    - can read patches e.g. to train DL (see the sketch after this list)
  - there is a cuTIFF package internally (working towards it)
    - cf. John Lefman
  - DB: at Janelia IO is an issue. direct GPU access would be great. GL: not yet. (That’s all TIFF-based)
  - Maybe a future GPU/image meeting, or just general chat.
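  - sketch for the patch-reading point above (shapes and the random fill are made up): with a chunked array, a training loop only pays for the window it reads:

        import numpy as np
        import zarr

        # Openslide-style window read from a chunked array; only the
        # chunks under the requested window get decoded.
        z = zarr.zeros((4096, 4096, 3), chunks=(512, 512, 3), dtype="u1")
        z[:] = np.random.randint(0, 255, size=z.shape, dtype="u1")  # stand-in pixels
        patch = z[1024:1536, 2048:2560]   # e.g. one DL training patch
        print(patch.shape)                # (512, 512, 3)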
- JM: benchmarking
  - DB: would be nice to see converted datasets in shards.
  - JM: ASV or similar running on netcdf? WF: Not regularly. Can be turned on. Mostly want to see downward slope of
  - will generate more benchmarks between HDF5 and NCZarr
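  - sketch for the ASV question above (placeholder sizes; follows ASV's time_* discovery convention, not real suite code):

        import numpy as np
        import zarr

        class ReadSuite:
            """Hypothetical ASV benchmarks timing zarr read paths."""

            def setup(self):
                self.z = zarr.zeros((2048, 2048), chunks=(256, 256), dtype="f8")
                self.z[:] = np.random.random((2048, 2048))

            def time_full_read(self):
                self.z[:]              # whole-array decode

            def time_single_chunk(self):
                self.z[:256, :256]     # one-chunk read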
- JK: post-call (due to DST; 20:11 UK)
  - interest in nvidia/zarr support
    - see: https://github.com/rapidsai/cucim/issues/94 (re: NGFF)
  - Q1: who’s adopting & how many people?
    - Josh: companies, C#, etc.
  - Q2: what is the community missing?
    - Moving to the V3 spec.
    - for nvidia, whatever library they adopt reads/writes the V3 spec
    - a lot of people are in the cloud, so a GPU-direct reader would need to be cloud-aware
-
“Versioned arrays”
- https://www.openmicroscopy.org/2020/12/01/zarr-hcs.html
etc.
-
Big V3 advertisement!
-
Babysitter/sabbatical money…
- Q3: Compression? zstd,
- Q4: which C++ library? Z5, xtensor-zarr, netcdf, tensorstore
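  - re Q3, a sketch (level is an arbitrary example) of picking zstd for a zarr array via numcodecs:

        import zarr
        from numcodecs import Zstd

        # Zstd as the array compressor; level 3 is just an example setting.
        z = zarr.zeros((1000, 1000), chunks=(100, 100), dtype="f8",
                       compressor=Zstd(level=3))
        print(z.compressor)   # Zstd(level=3)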