Commit graph

601 commits

SHA1 Message Date
cc6679c609
batch data; use generator 2022-10-20 15:22:29 +01:00
d306853c42
use right dataset 2022-10-20 15:16:24 +01:00
59cfa4a89a
basename paths 2022-10-20 15:11:14 +01:00
4d8ae21a45
update cli help text 2022-10-19 17:31:42 +01:00
200076596b
finish train_predict 2022-10-19 17:26:40 +01:00
488f78fca5
pretrain_predict: default to parallel_reads=0 2022-10-19 16:59:45 +01:00
63e909d9fc
datasets: add shuffle=True/False to get_filepaths.
This is important because otherwise it SCRAMBLES the filenames, which is a disaster for making predictions in the right order....!
2022-10-19 16:52:07 +01:00
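The repo's actual `get_filepaths` isn't shown here, but the idea in this commit can be sketched in plain Python: make shuffling opt-in, and fall back to sorted order so predictions stay aligned with their source files. The function name matches the commit; the globbing pattern and `seed` parameter are assumptions for illustration.

```python
import glob
import random

def get_filepaths(dirpath_input, shuffle=True, seed=None):
    """Sketch of a dataset file lister with an explicit shuffle switch.

    With shuffle=False the paths come back in sorted order, which keeps
    model predictions in the same order as the input files. With
    shuffle=True the order is randomised (optionally reproducibly via seed).
    """
    # Sort first so the unshuffled order is deterministic across filesystems.
    filepaths = sorted(glob.glob(f"{dirpath_input}/*.tfrecord"))
    if shuffle:
        rng = random.Random(seed)
        rng.shuffle(filepaths)
    return filepaths
```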
fe43ddfbf9
start implementing driver for train_predict, but not finished yet 2022-10-18 19:37:55 +01:00
4ceec73e5b
Merge branch 'main' of git.starbeamrainbowlabs.com:sbrl/PhD-Rainfall-Radar 2022-10-18 19:07:23 +01:00
0c11ddca4b
rainfallwrangler does NOT mess up the ordering of the data 2022-10-18 19:07:14 +01:00
b3ea189d37
segmentation: softmax the output 2022-10-13 21:02:57 +01:00
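Applying a softmax to a segmentation head turns the raw per-pixel logits into class probabilities that sum to 1. The commit's actual code isn't shown; a minimal, numerically stable softmax sketch of the operation per pixel:

```python
import math

def softmax(logits):
    """Numerically stable softmax over one pixel's class logits.

    Subtracting the max before exponentiating avoids overflow; the
    returned values are non-negative and sum to 1.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```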
f121bfb981
fixup summaryfile 2022-10-13 17:54:42 +01:00
5c35c0cee4
model_segmentation: document; remove unused args 2022-10-13 17:50:16 +01:00
f12e6ab905
No need for a CLI arg for feature_dim_in - metadata should contain this 2022-10-13 17:37:16 +01:00
e201372252
write quick Jupyter notebook to test data
....I'm paranoid
2022-10-13 17:27:17 +01:00
ae53130e66
layout 2022-10-13 14:54:20 +01:00
7933564c66
typo 2022-10-12 17:33:54 +01:00
dbe4fb0eab
train: add slurm job file 2022-10-12 17:27:10 +01:00
6423bf6702
LayerConvNeXtGamma: avoid adding an EagerTensor to config
Very weird how this is a problem when it wasn't before..
2022-10-12 17:12:07 +01:00
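The pitfall this commit fixes is a general one: a Keras layer's `get_config()` must return plain JSON-serialisable primitives, and storing a live tensor (an `EagerTensor`) in the config breaks model serialisation. The class below is a hypothetical sketch, not the repo's actual `LayerConvNeXtGamma`; it just demonstrates casting to a plain `float` before the value reaches the config.

```python
import json

class NotSerializable:
    """Stand-in for a tensor object that json cannot encode."""

class LayerConvNeXtGammaSketch:
    """Hypothetical sketch: keep only Python primitives in the config."""

    def __init__(self, init_value=1e-6):
        # Cast up front so get_config() never holds a tensor-like object.
        self.init_value = float(init_value)

    def get_config(self):
        return {"init_value": self.init_value}
```

Round-tripping the config through `json` now works, whereas a config holding a raw object would raise a `TypeError` during serialisation.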
32f5200d3b
pass model_arch properly 2022-10-12 16:50:06 +01:00
5933fb1061
fixup 2022-10-11 19:23:41 +01:00
c45b90764e
segmentation: adds xxtiny, but unsure if it's small enough 2022-10-11 19:22:37 +01:00
f4a2c742d9
typo 2022-10-11 19:19:23 +01:00
11f91a7cf4
train: add --arch; default to convnext_i_xtiny 2022-10-11 19:18:01 +01:00
5666c5a0d9
typo 2022-10-10 18:12:51 +01:00
131c0a0a5b
pretrain-predict: create dir if not exists 2022-10-10 18:00:55 +01:00
deede32241
slurm-pretrain: limit memory usage 2022-10-10 17:45:29 +01:00
13a8f3f511
pretrain-predict: only queue pretrain-plot if we output jsonl 2022-10-10 17:11:10 +01:00
ffcb2e3735
pretrain-predict: queue for the actual input 2022-10-10 16:53:28 +01:00
f883986eaa
Bugfix: modeset to enable TFRecordWriter instead of bare handle 2022-10-06 20:07:59 +01:00
e9a8e2eb57
fixup 2022-10-06 19:23:31 +01:00
9f3ae96894
finish wiring for --water-size 2022-10-06 19:21:50 +01:00
5dac70aa08
typo 2022-10-06 19:17:03 +01:00
2960d3b645
exception → warning 2022-10-06 18:26:40 +01:00
0ee6703c1e
Add todo and comment 2022-10-03 19:06:56 +01:00
2b182214ea
typo 2022-10-03 17:53:10 +01:00
92c380bff5
fiddle with Conv2DTranspose
you need to set the `stride` argument to actually get it to upscale..... :P
2022-10-03 17:51:41 +01:00
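The point of this commit generalises: a transposed convolution only upscales when its stride is greater than 1 (in Keras the argument is spelled `strides`). The per-axis output-size arithmetic can be sketched directly; the function name is hypothetical, and the formulas below match the standard transposed-convolution sizes (no output padding or dilation).

```python
def conv_transpose_output_size(size_in, kernel_size, strides=1, padding="same"):
    """Per-axis output length of a transposed convolution.

    With padding="same" and strides=1 the output is the same size as the
    input, i.e. no upscaling happens at all; strides=2 doubles it.
    """
    if padding == "same":
        return size_in * strides
    # padding == "valid"
    return (size_in - 1) * strides + kernel_size
```

For example, a 64-wide input stays 64 wide at `strides=1` but becomes 128 wide at `strides=2` with "same" padding.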
d544553800
fixup 2022-10-03 17:33:06 +01:00
058e3b6248
model_segmentation: cast float → int 2022-10-03 17:31:36 +01:00
04e5ae0c45
model_segmentation: redo reshape
much cheese was applied :P
2022-10-03 17:27:52 +01:00
deffe69202
typo 2022-10-03 16:59:36 +01:00
fc6d2dabc9
Upscale first, THEN convnext... 2022-10-03 16:38:43 +01:00
6a0790ff50
convnext_inverse: add returns; change ordering 2022-10-03 16:32:09 +01:00
fe813cb46d
slurm predict: fix plotting subcall 2022-10-03 16:03:26 +01:00
e51087d0a9
add reshape layer 2022-09-28 18:22:48 +01:00
a336cdee90
and continues 2022-09-28 18:18:10 +01:00
de47a883d9
missing units 2022-09-28 18:17:22 +01:00
b5e08f92fe
the long night continues 2022-09-28 18:14:09 +01:00
dc159ecfdb
and again 2022-09-28 18:11:46 +01:00
4cf0485e32
fixup... again 2022-09-28 18:10:11 +01:00