Commit graph

213 commits

SHA1 Message Date
a7a475dcd1
debug 2 2022-11-11 18:38:07 +00:00
bf2f6e9b64
debug logging
it begins again
2022-11-11 18:31:40 +00:00
481eeb3759
mono: fix dataset preprocessing
rogue dimension
2022-11-11 18:31:27 +00:00
9035450213
mono: instantiate right model 2022-11-11 18:28:29 +00:00
8ac5159adc
dataset_mono: simplify param passing, onehot+threshold water depth data 2022-11-11 18:23:50 +00:00
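The "onehot+threshold" step this commit describes can be sketched as follows. This is an illustrative NumPy reconstruction, not the repo's actual `dataset_mono` code; the function name and the threshold value are assumptions.

```python
import numpy as np

def threshold_onehot(water_depth, threshold=0.1):
    """Binarise water depth at `threshold`, then one-hot encode to 2 classes.
    NOTE: the 0.1 threshold is an illustrative value, not the repo's setting."""
    binary = (water_depth > threshold).astype(np.int64)  # 0 = dry, 1 = wet
    onehot = np.eye(2, dtype=np.float32)[binary]         # shape (..., 2)
    return onehot
```

The one-hot form pairs naturally with the switch to normal (non-sparse) crossentropy made in the later commits.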
3a3f7e85da
typo 2022-11-11 18:03:09 +00:00
3313f77c88
Add (untested) mono rainfall → water depth model
* sighs *
Unfortunately I can't seem to get contrastive learning to work.....
2022-11-10 22:36:11 +00:00
9384b89165
model_segmentation: sparse → normal crossentropy, activation functions at end 2022-11-10 20:53:37 +00:00
b6676e7361
switch from sparse to normal crossentropy 2022-11-10 20:50:56 +00:00
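The difference behind this switch: sparse crossentropy takes integer class labels, while normal (categorical) crossentropy takes one-hot vectors. A minimal NumPy sketch of the two (not the repo's Keras code) shows they compute the same quantity when the labels correspond:

```python
import numpy as np

def categorical_ce(onehot, probs, eps=1e-7):
    # "normal" crossentropy: labels are one-hot vectors
    return -np.sum(onehot * np.log(probs + eps), axis=-1)

def sparse_ce(labels, probs, eps=1e-7):
    # sparse crossentropy: labels are integer class indices
    return -np.log(probs[np.arange(len(labels)), labels] + eps)
```

With `onehot = np.eye(num_classes)[labels]`, both return identical per-sample losses; the choice is about the label format your dataset pipeline emits.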
d8be26d476
Merge branch 'main' of git.starbeamrainbowlabs.com:sbrl/PhD-Rainfall-Radar 2022-11-10 20:49:01 +00:00
b03388de60
dataset_segmenter: DEBUG: fix water shape 2022-11-10 20:48:21 +00:00
daf691bf43
typo 2022-11-10 19:55:00 +00:00
0aa2ce19f5
read_metadata: support file inputs as well as dirs 2022-11-10 19:53:30 +00:00
44ad51f483
CallbackNBatchCsv: bugfix .sort() → sorted() 2022-11-04 16:40:21 +00:00
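The bug class this commit fixes: `list.sort()` sorts in place and returns `None`, whereas `sorted()` returns a new sorted list. A minimal demonstration:

```python
keys = ["epoch", "batch", "loss"]

result = keys.sort()        # BUG: sorts in place, returns None
assert result is None

keys = ["epoch", "batch", "loss"]
result = sorted(keys)       # correct: returns a new sorted list
assert result == ["batch", "epoch", "loss"]
```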
1375201c5f
CallbackNBatchCsv: open_handle mode 2022-11-03 18:29:00 +00:00
f2ae74ce7b
how could I be so stupid..... round 2 2022-11-02 17:38:26 +00:00
5f8d6dc6ea
Add metrics every 64 batches
this is important, because with large batches it can be difficult to tell what's happening inside each epoch.
2022-10-31 19:26:10 +00:00
cf872ef739
how could I be so *stupid*...... 2022-10-31 18:40:58 +00:00
da32d75778
make_callbacks: display steps, not samples 2022-10-31 18:36:28 +00:00
172cf9d8ce
tweak 2022-10-31 18:19:43 +00:00
dbe35ee943
loss: comment l2 norm 2022-10-31 18:09:03 +00:00
5e60319024
fixup 2022-10-31 17:56:49 +00:00
b986b069e2
debug party time 2022-10-31 17:50:29 +00:00
458faa96d2
loss: fixup 2022-10-31 17:18:21 +00:00
55dc05e8ce
contrastive: comment weights that aren't needed 2022-10-31 16:26:48 +00:00
1b489518d0
segmenter: add LayerStack2Image to custom_objects 2022-10-26 17:05:50 +01:00
48ae8a5c20
LossContrastive: normalise features as per the paper 2022-10-26 16:52:56 +01:00
843cc8dc7b
contrastive: rewrite the loss function.
The CLIP paper *does* kinda make sense I think
2022-10-26 16:45:45 +01:00
fad1399c2d
convnext: whitespace 2022-10-26 16:45:20 +01:00
1d872cb962
contrastive: fix initial temperature value
It should be 1/0.07, but we had it set to 0.07......
2022-10-26 16:45:01 +01:00
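The two contrastive fixes above (feature normalisation, and the temperature's initial value of 1/0.07 ≈ 14.3 rather than 0.07) follow the CLIP recipe: L2-normalise both embedding batches, then scale the cosine-similarity logits by the temperature. A NumPy sketch under those assumptions (illustrative only, not the repo's TensorFlow code; names are hypothetical):

```python
import numpy as np

def clip_logits(rainfall_emb, water_emb, temperature=1 / 0.07):
    """L2-normalise both embedding batches, then return scaled similarity logits."""
    r = rainfall_emb / np.linalg.norm(rainfall_emb, axis=-1, keepdims=True)
    w = water_emb / np.linalg.norm(water_emb, axis=-1, keepdims=True)
    return temperature * (r @ w.T)  # cosine similarities scaled by temperature
```

In CLIP the temperature is a learnable parameter initialised to 1/0.07; setting it to 0.07 instead makes all logits ~200× too small, which flattens the softmax and stalls training.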
f994d449f1
Layer2Image: fix 2022-10-25 21:32:17 +01:00
6a29105f56
model_segmentation: stack not reshape 2022-10-25 21:25:15 +01:00
98417a3e06
prepare for NCE loss
.....but Tensorflow's implementation looks to be for supervised models :-(
2022-10-25 21:15:05 +01:00
bb0679a509
model_segmentation: don't softmax twice 2022-10-25 21:11:48 +01:00
f2e2ca1484
model_contrastive: make water encoder significantly shallower 2022-10-24 20:52:31 +01:00
a8b101bdae
dataset_predict: add shape_water_desired 2022-10-24 18:05:13 +01:00
8195318a42
SparseCategoricalAccuracy: losses → metrics 2022-10-21 16:51:20 +01:00
612735aaae
rename shuffle arg 2022-10-21 16:35:45 +01:00
c98d8d05dd
segmentation: use the right accuracy 2022-10-21 16:17:05 +01:00
af26964c6a
batched_iterator: reset i_item after every time 2022-10-21 15:35:43 +01:00
42aea7a0cc
plt.close() fixup 2022-10-21 15:23:54 +01:00
12dad3bc87
vis/segmentation: fix titles 2022-10-21 15:22:35 +01:00
0cb2de5d06
train-predict: close matplotlib figures after we've finished
they act like file handles
2022-10-21 15:19:31 +01:00
3f7db6fa78
fix embedding confusion 2022-10-21 15:15:59 +01:00
59cfa4a89a
basename paths 2022-10-20 15:11:14 +01:00
200076596b
finish train_predict 2022-10-19 17:26:40 +01:00
63e909d9fc
datasets: add shuffle=True/False to get_filepaths.
This is important because otherwise it SCRAMBLES the filenames, which is a disaster for making predictions in the right order....!
2022-10-19 16:52:07 +01:00
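The shuffle flag this commit describes might look like the sketch below: list deterministically (sorted) by default, and shuffle only when asked. This is a hypothetical reconstruction; the repo's real `get_filepaths` differs.

```python
import os
import random

def get_filepaths(dirpath, shuffle=True):
    """List files in sorted (deterministic) order; optionally shuffle for training.
    For prediction, pass shuffle=False so outputs stay aligned with their inputs."""
    filepaths = sorted(
        os.path.join(dirpath, name) for name in os.listdir(dirpath)
    )
    if shuffle:
        random.shuffle(filepaths)
    return filepaths
```

The key point is that `os.listdir` order is not guaranteed, so sorting first is what makes `shuffle=False` actually deterministic.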
fe43ddfbf9
start implementing driver for train_predict, but not finished yet 2022-10-18 19:37:55 +01:00
b3ea189d37
segmentation: softmax the output 2022-10-13 21:02:57 +01:00
f121bfb981
fixup summaryfile 2022-10-13 17:54:42 +01:00
5c35c0cee4
model_segmentation: document; remove unused args 2022-10-13 17:50:16 +01:00
f12e6ab905
No need for a CLI arg for feature_dim_in - metadata should contain this 2022-10-13 17:37:16 +01:00
ae53130e66
layout 2022-10-13 14:54:20 +01:00
6423bf6702
LayerConvNeXtGamma: avoid adding an EagerTensor to config
Very weird how this is a problem when it wasn't before..
2022-10-12 17:12:07 +01:00
32f5200d3b
pass model_arch properly 2022-10-12 16:50:06 +01:00
c45b90764e
segmentation: adds xxtiny, but unsure if it's small enough 2022-10-11 19:22:37 +01:00
11f91a7cf4
train: add --arch; default to convnext_i_xtiny 2022-10-11 19:18:01 +01:00
e9a8e2eb57
fixup 2022-10-06 19:23:31 +01:00
9f3ae96894
finish wiring for --water-size 2022-10-06 19:21:50 +01:00
5dac70aa08
typo 2022-10-06 19:17:03 +01:00
2960d3b645
exception → warning 2022-10-06 18:26:40 +01:00
0ee6703c1e
Add todo and comment 2022-10-03 19:06:56 +01:00
2b182214ea
typo 2022-10-03 17:53:10 +01:00
92c380bff5
fiddle with Conv2DTranspose
you need to set the `stride` argument to actually get it to upscale..... :P
2022-10-03 17:51:41 +01:00
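The point behind this commit: Keras `Conv2DTranspose` defaults to `strides=(1, 1)`, which does not upscale at all; strides greater than 1 are what enlarge the spatial dimensions. With `padding="same"` the output size is simply `input_size * stride`; with `padding="valid"` it is `(input_size - 1) * stride + kernel_size`. A tiny helper for that formula (pure Python, to avoid a TensorFlow dependency; ignores `output_padding`):

```python
def conv2d_transpose_size(input_size, kernel_size, stride, padding="same"):
    """Spatial output size of a Conv2DTranspose layer (no output_padding)."""
    if padding == "same":
        return input_size * stride
    # padding == "valid"
    return (input_size - 1) * stride + kernel_size
```

So a 32×32 feature map with `strides=2, padding="same"` comes out 64×64, while the default `strides=1` leaves it 32×32 regardless of kernel size.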
d544553800
fixup 2022-10-03 17:33:06 +01:00
058e3b6248
model_segmentation: cast float → int 2022-10-03 17:31:36 +01:00
04e5ae0c45
model_segmentation: redo reshape
much cheese was applied :P
2022-10-03 17:27:52 +01:00
deffe69202
typo 2022-10-03 16:59:36 +01:00
fc6d2dabc9
Upscale first, THEN convnext... 2022-10-03 16:38:43 +01:00
6a0790ff50
convnext_inverse: add returns; change ordering 2022-10-03 16:32:09 +01:00
e51087d0a9
add reshape layer 2022-09-28 18:22:48 +01:00
a336cdee90
and continues 2022-09-28 18:18:10 +01:00
de47a883d9
missing units 2022-09-28 18:17:22 +01:00
b5e08f92fe
the long night continues 2022-09-28 18:14:09 +01:00
dc159ecfdb
and again 2022-09-28 18:11:46 +01:00
4cf0485e32
fixup... again 2022-09-28 18:10:11 +01:00
030d8710b6
fixup 2022-09-28 18:08:31 +01:00
4ee7f2a0d6
add water thresholding 2022-09-28 18:07:26 +01:00
41ba980d69
segmentation: implement dataset parser 2022-09-28 17:19:21 +01:00
e9e6139c7a
typo 2022-09-28 16:28:18 +01:00
d6ff3fb2ce
pretrain_predict: fix write mode 2022-09-27 17:38:12 +01:00
f95fd8f9e4
pretrain-predict: add .tfrecord output function 2022-09-27 16:59:31 +01:00
30b8dd063e
fixup 2022-09-27 15:54:37 +01:00
3cf99587e4
Contraster: add gamma layer to load_model 2022-09-27 15:53:52 +01:00
d59de41ebb
embeddings: change title rendering; make even moar widererer
We need to see that parallel coordinates plot in detail
2022-09-23 18:56:39 +01:00
5252a81238
vis: don't call add_subplot 2022-09-20 19:06:21 +01:00
a552cc4dad
ai vis: make parallel coordinates wider 2022-09-16 18:51:49 +01:00
a70794e661
umap: no min_dist 2022-09-16 17:09:09 +01:00
5778fc51f7
embeddings: fix title; remove colourmap 2022-09-16 17:08:04 +01:00
fcab227f6a
cheese: set label for everything to 1 2022-09-16 16:42:05 +01:00
1e35802d2b
ai: fix embed i/o 2022-09-16 16:02:27 +01:00
ed94da7492
fixup 2022-09-16 15:51:26 +01:00
366db658a8
ds predict: fix filenames in 2022-09-16 15:45:22 +01:00
e333dcba9c
tweak projection head 2022-09-16 15:36:01 +01:00
6defd24000
bugfix: too many values to unpack 2022-09-15 19:56:17 +01:00
e3c8277255
ai: tweak the segmentation model structure 2022-09-15 19:54:50 +01:00
bd64986332
ai: implement batched_iterator to replace .batch()
...apparently .batch() means you get a BatchedDataset or whatever when you iterate it like a tf.function instead of the actual tensor :-/
2022-09-15 19:16:38 +01:00
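A minimal pure-Python sketch of what a `batched_iterator` replacement might look like (the real implementation works over a tf.data pipeline and yields tensors; this version and its names are illustrative). Note the accumulator reset after every full batch, which the later `batched_iterator: reset i_item` commit also fixes:

```python
def batched_iterator(dataset, batch_size):
    """Yield lists of `batch_size` items, plus a final partial batch if any.
    Unlike iterating a tf.data .batch() dataset inside a tf.function, this
    hands back concrete items directly."""
    batch = []
    for item in dataset:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []  # reset the accumulator after every full batch
    if batch:
        yield batch
```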
ccd256c00a
embed rainfall radar, not both 2022-09-15 17:37:04 +01:00
2c74676902
predict → predict_on_batch 2022-09-15 17:31:50 +01:00
f036e79098
fixup 2022-09-15 17:09:26 +01:00