Commit graph

461 commits

Author SHA1 Message Date
91846079b2
deeplabv3+ test: add shebang 2022-12-13 12:56:14 +00:00
8866960017
TEST SCRIPT: deeplabv3
ref https://keras.io/examples/vision/deeplabv3_plus/
dataset ref https://drive.google.com/uc?id=1B9A9UCJYMwTL4oBEo4RZfbMZMaZhKJaz

(the code is *terrible* spaghetti....!)
2022-12-12 19:20:07 +00:00
4e4d42a281
LossDice: add comment 2022-12-12 18:34:20 +00:00
449bc425a7
LossDice: explicitly cast inputs to float32 2022-12-12 17:20:32 +00:00
dbf8f5617c
drop activation function in last layers 2022-12-12 17:20:04 +00:00
bcd2f1251e
LossDice: compute 1 - dice instead of -dice 2022-12-09 19:41:32 +00:00
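Taken together, the LossDice commits above point at something like the following; a minimal sketch of a Dice loss in Keras (the `smooth` term and the class shape are assumptions, not the repo's exact LossDice):

```python
import tensorflow as tf

class LossDice(tf.keras.losses.Loss):
    """Dice loss, sketched from the commit messages above (illustrative, not the repo's code)."""
    def __init__(self, smooth=1.0, **kwargs):
        super().__init__(**kwargs)
        self.smooth = smooth  # assumed smoothing constant to avoid division by zero

    def call(self, y_true, y_pred):
        # Explicitly cast inputs to float32 so integer ground truth doesn't break the maths
        y_true = tf.cast(y_true, tf.float32)
        y_pred = tf.cast(y_pred, tf.float32)
        intersection = tf.reduce_sum(y_true * y_pred)
        total = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
        dice = (2.0 * intersection + self.smooth) / (total + self.smooth)
        # 1 - dice rather than -dice: the loss stays positive and approaches 0 at a perfect match
        return 1.0 - dice
```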
d0dbc50bb7
debug 2022-12-09 19:33:28 +00:00
2142bb039c
again 2022-12-09 19:30:01 +00:00
7000b0f193
fixup 2022-12-09 19:23:35 +00:00
85012d0616
fixup 2022-12-09 19:18:03 +00:00
719d8e9819
strip channels layer at end 2022-12-09 19:11:00 +00:00
0129c35a35
LossDice: remove weird K.* functions 2022-12-09 19:06:26 +00:00
659fc97fd4
fix crash 2022-12-09 18:39:27 +00:00
e22c0981e6
actually use dice loss 2022-12-09 18:35:17 +00:00
649c262960
mono: switch loss from crossentropy to dice 2022-12-09 18:13:37 +00:00
7fd7c750d6
jupyter: identity test
status: FAILED, as usual....!
Don't worry though, 'cause we has a *planses*..... MUHAHAHAHAHAHAHA
* cue evil laugh *
2022-12-09 18:07:56 +00:00
cf9e8aa237
jupyter: convnext-mono identity test 2022-12-09 15:50:27 +00:00
2a1772a211
convnext_inverse: add shallow 2022-12-08 19:10:12 +00:00
c27869630a
I hate VSCode's git commit interface
it doesn't let you amend
2022-12-08 18:58:54 +00:00
b3345963f3
missing arg pass 2022-12-08 18:58:32 +00:00
3dde9b69da
fixup 2022-12-08 18:56:32 +00:00
6fce39f696
WHY?!?!?! 2022-12-08 18:55:53 +00:00
26766366fc
I hate the python code intelligence
it's bad
2022-12-08 18:55:15 +00:00
ff56f591c7
I hate python 2022-12-08 18:53:37 +00:00
d37e7224f5
train-mono: tidy up arg passing 2022-12-08 18:47:03 +00:00
b53db648bf
fixup 2022-12-08 18:31:42 +00:00
18c0210704
typo 2022-12-08 17:00:25 +00:00
a3c9416cf0
LossCrossentropy: don't sum 2022-12-08 16:57:11 +00:00
08046340f4
dataset_mono: normalise heightmap 2022-12-08 16:10:34 +00:00
d997157f55
dataset_mono: log when using heightmap 2022-12-06 19:30:11 +00:00
d0f2e3d730
readfile: do transparent gzip by default
....but there's a flag to turn it off if needed
2022-12-06 19:27:39 +00:00
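A minimal sketch of a transparent-gzip reader in that spirit (the parameter name is an assumption): sniff the gzip magic bytes, decompress when found, and fall back to a plain read otherwise.

```python
import gzip

def readfile(filepath, transparent_gzip=True):
    """Read a file as text, transparently decompressing gzip unless told not to."""
    if transparent_gzip:
        with open(filepath, "rb") as handle:
            is_gzip = handle.read(2) == b"\x1f\x8b"  # gzip magic bytes
        if is_gzip:
            with gzip.open(filepath, "rt") as handle:
                return handle.read()
    with open(filepath, "r") as handle:
        return handle.read()
```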
eac6472c97
Implement support for (optionally) taking a heightmap as an input 2022-12-06 18:55:58 +00:00
f92b2b3472
according to the equation it looks like it's 2 2022-12-02 17:22:46 +00:00
cad82cd1bc
CBAM: unsure if it's 1 or 3 dense layers in the shared MLP 2022-12-02 17:21:13 +00:00
62f6a993bb
implement CBAM, but it's UNTESTED
Convolutional Block Attention Module.
2022-12-02 17:17:45 +00:00
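For reference, a sketch of CBAM in Keras, since the commits above implement it and then wonder about the shared MLP: in the paper, channel attention uses a shared two-Dense MLP (one hidden layer) over average- and max-pooled descriptors, followed by a 7×7 spatial-attention convolution. This is an illustration of the technique, not the repo's untested implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, reduction=16, spatial_kernel=7):
    """Convolutional Block Attention Module (illustrative sketch)."""
    channels = x.shape[-1]
    # Channel attention: shared 2-layer MLP applied to avg- and max-pooled descriptors
    shared_dense_1 = layers.Dense(channels // reduction, activation="relu")
    shared_dense_2 = layers.Dense(channels)
    avg_pool = layers.GlobalAveragePooling2D()(x)
    max_pool = layers.GlobalMaxPooling2D()(x)
    channel_attn = layers.Activation("sigmoid")(
        shared_dense_2(shared_dense_1(avg_pool)) + shared_dense_2(shared_dense_1(max_pool))
    )
    x = x * layers.Reshape((1, 1, channels))(channel_attn)
    # Spatial attention: 7x7 conv over the channel-wise average and max maps
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)
    spatial_attn = layers.Conv2D(1, spatial_kernel, padding="same", activation="sigmoid")(
        tf.concat([avg_map, max_map], axis=-1)
    )
    return x * spatial_attn
```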
9d666c3b38
train mono: type=int → float 2022-12-01 15:39:44 +00:00
53dfa32685
model_mono: log learning rate 2022-12-01 15:10:51 +00:00
c384d55dff
add arg to adjust learning rate 2022-11-29 20:55:00 +00:00
8e23e9d341
model_segmenter: we're no longer using sparse 2022-11-29 19:28:27 +00:00
9a2b4c6838
dsseg: fix reshape/onehot ordering 2022-11-29 19:28:13 +00:00
df774146d9
dataset_segmenter: reshape, not squeeze 2022-11-29 19:24:54 +00:00
77b8a1a8db
dataset_segmenter: squeeze 2022-11-29 19:16:15 +00:00
01101ad30b
losscrossentropy: return the reduced value * facepalm * 2022-11-29 19:07:08 +00:00
37f196a785
LossCrossentropy: add kwargs 2022-11-29 15:40:35 +00:00
838ff56a3b
mono: fix loading checkpoint 2022-11-29 15:25:11 +00:00
dba6cbffcd
WHY. * facepalms * 2022-11-28 19:33:42 +00:00
57b8eb93fb
fixup 2022-11-28 19:09:35 +00:00
6640a41bb7
almost got it....? it's not what I expected....! 2022-11-28 19:08:50 +00:00
f48473b703
fixup 2022-11-28 19:00:11 +00:00
f6feb125e3
this is some serious debugging.
This commit will produce an extremely large volume of output.
2022-11-28 18:57:41 +00:00
09f81b0746
train_mono: debug
this commit will generate a large amount of debug output.
2022-11-28 16:46:17 +00:00
f39e4ade70
LayerConvNextGamma: fix config serialisation bug
.....this is unlikely to be the problem as this bug is in an unused code path.
2022-11-25 21:16:31 +00:00
e7410fb480
train_mono_predict: limit label size to 64x64
that's the size the model predicts
2022-11-25 17:47:17 +00:00
51dd484d13
fixup 2022-11-25 16:55:45 +00:00
884c4eb150
rainfall_stats: formatting again 2022-11-24 19:08:07 +00:00
bfe038086c
rainfall_stats: formatting 2022-11-24 19:07:44 +00:00
7dba03200f
fixup 2022-11-24 19:06:48 +00:00
e5258b9c66
typo 2022-11-24 19:06:13 +00:00
64d646bb13
rainfall_stats: formatting 2022-11-24 19:05:35 +00:00
675c7a7448
fixup 2022-11-24 19:03:28 +00:00
afc1cdcf02
fixup 2022-11-24 19:02:58 +00:00
e4bea89c89
typo 2022-11-24 19:01:52 +00:00
a40cbe8705
rainfall_stats: remove unused imports 2022-11-24 19:01:18 +00:00
fe57d6aab2
rainfall_stats: initial implementation
this might reveal why we are having problems. If most/all of the rainfall radar
data is very small numbers, normalising might help.
2022-11-24 18:58:16 +00:00
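A rough sketch of the kind of statistics pass described here (data loading is a placeholder): if the radar values really are tiny, min/max/mean/std over a sample will show it and give the constants needed to normalise.

```python
import numpy as np

def rainfall_stats(arrays):
    """Summarise a sample of rainfall radar arrays to see whether normalisation is needed."""
    sample = np.concatenate([np.asarray(arr, dtype=np.float32).ravel() for arr in arrays])
    return {
        "min": float(sample.min()),
        "max": float(sample.max()),
        "mean": float(sample.mean()),
        "std": float(sample.std()),
    }

# If the values cluster near zero, rescale, e.g. (x - mean) / std or x / max.
```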
3131b4f7b3
debug2 2022-11-24 18:25:32 +00:00
d55a13f536
debug 2022-11-24 18:24:03 +00:00
1f60f2a580
do_argmax 2022-11-24 18:11:03 +00:00
6c09d5254d
fixup 2022-11-24 17:57:48 +00:00
54a841efe9
train_mono_predict: convert to correct format 2022-11-24 17:56:07 +00:00
105dc5bc56
missing kwargs 2022-11-24 17:51:29 +00:00
1e1d6dd273
fixup 2022-11-24 17:48:19 +00:00
011e0aef78
update cli docs 2022-11-24 16:38:07 +00:00
773944f9fa
train_mono_predict: initial implementation 2022-11-24 16:33:50 +00:00
3a0356929c
mono: drop the sparse 2022-11-22 16:20:56 +00:00
7e8f63f8ba
fixup 2022-11-21 19:38:24 +00:00
ace4c8b246
dataset_mono: debug 2022-11-21 18:46:21 +00:00
527b34942d
convnext_inverse: kernel_size 4→2 2022-11-11 19:29:37 +00:00
0662d0854b
model_mono: fix bottleneck 2022-11-11 19:11:40 +00:00
73acda6d9a
fix debug logging 2022-11-11 19:08:38 +00:00
9da059d738
model shape logging 2022-11-11 19:03:37 +00:00
00917b2698
dataset_mono: log shapes 2022-11-11 19:02:43 +00:00
54ae88b1b4
in this entire blasted project I have yet to get the rotation of anything correct....! 2022-11-11 18:58:45 +00:00
a7a475dcd1
debug 2 2022-11-11 18:38:07 +00:00
bf2f6e9b64
debug logging
it begins again
2022-11-11 18:31:40 +00:00
481eeb3759
mono: fix dataset preprocessing
rogue dimension
2022-11-11 18:31:27 +00:00
9035450213
mono: instantiate right model 2022-11-11 18:28:29 +00:00
69a2d0cf04
fixup 2022-11-11 18:27:01 +00:00
65e801cf28
train_mono: fix crash 2022-11-11 18:26:25 +00:00
8ac5159adc
dataset_mono: simplify param passing, onehot+threshold water depth data 2022-11-11 18:23:50 +00:00
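The onehot + threshold step mentioned here, sketched with standard TensorFlow ops (the threshold value is a placeholder):

```python
import tensorflow as tf

def water_depth_to_onehot(water_depth, threshold=0.1):
    """Binarise a water depth map at `threshold`, then one-hot encode to 2 classes."""
    water_binary = tf.cast(water_depth > threshold, tf.int32)  # 0 = dry, 1 = wet
    return tf.one_hot(water_binary, depth=2)                   # shape (..., 2)
```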
3a3f7e85da
typo 2022-11-11 18:03:09 +00:00
3313f77c88
Add (untested) mono rainfall → water depth model
* sighs *
Unfortunately I can't seem to get contrastive learning to work.....
2022-11-10 22:36:11 +00:00
9384b89165
model_segmentation: sparse → normal crossentropy, activation functions at end 2022-11-10 20:53:37 +00:00
b6676e7361
switch from sparse to normal crossentropy 2022-11-10 20:50:56 +00:00
d8be26d476
Merge branch 'main' of git.starbeamrainbowlabs.com:sbrl/PhD-Rainfall-Radar 2022-11-10 20:49:01 +00:00
b03388de60
dataset_segmenter: DEBUG: fix water shape 2022-11-10 20:48:21 +00:00
daf691bf43
typo 2022-11-10 19:55:00 +00:00
0aa2ce19f5
read_metadata: support file inputs as well as dirs 2022-11-10 19:53:30 +00:00
aa7d9b8cf6
fixup 2022-11-10 19:46:09 +00:00
0894bd09e8
train_predict: add error message for params.json not found 2022-11-10 19:45:41 +00:00
44ad51f483
CallbackNBatchCsv: bugfix .sort() → sorted() 2022-11-04 16:40:21 +00:00
4dddcfcb42
pretrain_predict: missing \n 2022-11-04 16:01:28 +00:00
1375201c5f
CallbackNBatchCsv: open_handle mode 2022-11-03 18:29:00 +00:00
f2ae74ce7b
how could I be so stupid..... round 2 2022-11-02 17:38:26 +00:00
5f8d6dc6ea
Add metrics every 64 batches
this is important, because with large batches it can be difficult to tell what's happening inside each epoch.
2022-10-31 19:26:10 +00:00
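Between this commit and the CallbackNBatchCsv fixes above, the mechanism is presumably along these lines; a sketch of an every-N-batches CSV metrics callback (column handling and file mode are assumptions, not the repo's exact class):

```python
import tensorflow as tf

class CallbackNBatchCsv(tf.keras.callbacks.Callback):
    """Logs training metrics to a CSV file every n_batches batches (illustrative sketch)."""
    def __init__(self, filepath, n_batches=64):
        super().__init__()
        self.handle = open(filepath, "w")  # open_handle mode: write, not append
        self.n_batches = n_batches
        self.header_written = False

    def on_train_batch_end(self, batch, logs=None):
        if logs is None or batch % self.n_batches != 0:
            return
        keys = sorted(logs.keys())  # sorted(), not .sort(): list.sort() returns None
        if not self.header_written:
            self.handle.write("batch," + ",".join(keys) + "\n")
            self.header_written = True
        self.handle.write(str(batch) + "," + ",".join(str(logs[key]) for key in keys) + "\n")
        self.handle.flush()
```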
cf872ef739
how could I be so *stupid*...... 2022-10-31 18:40:58 +00:00
da32d75778
make_callbacks: display steps, not samples 2022-10-31 18:36:28 +00:00
dfef7db421
moar debugging 2022-10-31 18:26:34 +00:00
172cf9d8ce
tweak 2022-10-31 18:19:43 +00:00
dbe35ee943
loss: comment l2 norm 2022-10-31 18:09:03 +00:00
5e60319024
fixup 2022-10-31 17:56:49 +00:00
b986b069e2
debug party time 2022-10-31 17:50:29 +00:00
458faa96d2
loss: fixup 2022-10-31 17:18:21 +00:00
55dc05e8ce
contrastive: comment weights that aren't needed 2022-10-31 16:26:48 +00:00
33391eaf16
train_predict/jsonl: don't argmax
I'm interested in the raw values
2022-10-26 17:21:19 +01:00
74f2cdb900
train_predict: .list() → .tolist() 2022-10-26 17:12:36 +01:00
4f9d543695
train_predict: don't pass model_code
it's redundant
2022-10-26 17:11:36 +01:00
1b489518d0
segmenter: add LayerStack2Image to custom_objects 2022-10-26 17:05:50 +01:00
48ae8a5c20
LossContrastive: normalise features as per the paper 2022-10-26 16:52:56 +01:00
843cc8dc7b
contrastive: rewrite the loss function.
The CLIP paper *does* kinda make sense I think
2022-10-26 16:45:45 +01:00
fad1399c2d
convnext: whitespace 2022-10-26 16:45:20 +01:00
1d872cb962
contrastive: fix initial temperature value
It should be 1/0.07, but we had it set to 0.07......
2022-10-26 16:45:01 +01:00
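Pieced together from the contrastive-loss commits above, the target formulation is presumably CLIP's: L2-normalise both sets of features, scale the pairwise dot products by a learnable temperature whose logit scale starts at 1/0.07, and apply a symmetric cross-entropy against the diagonal. A sketch of that standard formulation (not necessarily the repo's exact LossContrastive):

```python
import math
import tensorflow as tf

class LossContrastive(tf.keras.losses.Loss):
    """CLIP-style symmetric contrastive loss (illustrative sketch of the paper's formulation)."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Learnable log temperature; exp(log(1/0.07)) ≈ 14.3 is the CLIP initial logit scale.
        # (In practice this variable usually lives on the model so the optimiser trains it.)
        self.log_temperature = tf.Variable(math.log(1 / 0.07), trainable=True, dtype=tf.float32)

    def call(self, features_a, features_b):
        # Normalise the features as per the paper, so the dot products are cosine similarities
        a = tf.math.l2_normalize(features_a, axis=-1)
        b = tf.math.l2_normalize(features_b, axis=-1)
        logits = tf.matmul(a, b, transpose_b=True) * tf.exp(self.log_temperature)
        labels = tf.range(tf.shape(logits)[0])  # matching pairs sit on the diagonal
        loss_a = tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
        loss_b = tf.keras.losses.sparse_categorical_crossentropy(labels, tf.transpose(logits), from_logits=True)
        return (tf.reduce_mean(loss_a) + tf.reduce_mean(loss_b)) / 2
```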
f994d449f1
Layer2Image: fix 2022-10-25 21:32:17 +01:00
6a29105f56
model_segmentation: stack not reshape 2022-10-25 21:25:15 +01:00
98417a3e06
prepare for NCE loss
.....but TensorFlow's implementation looks to be for supervised models :-(
2022-10-25 21:15:05 +01:00
bb0679a509
model_segmentation: don't softmax twice 2022-10-25 21:11:48 +01:00
f2e2ca1484
model_contrastive: make water encoder significantly shallower 2022-10-24 20:52:31 +01:00
a6b07a49cb
count water/nowater pixels in Jupyter Notebook 2022-10-24 18:05:34 +01:00
a8b101bdae
dataset_predict: add shape_water_desired 2022-10-24 18:05:13 +01:00
587c1dfafa
train_predict: revamp jsonl handling 2022-10-21 16:53:08 +01:00
8195318a42
SparseCategoricalAccuracy: losses → metrics 2022-10-21 16:51:20 +01:00
612735aaae
rename shuffle arg 2022-10-21 16:35:45 +01:00
c98d8d05dd
segmentation: use the right accuracy 2022-10-21 16:17:05 +01:00
bb0258f5cd
flip squeeze operator ordering 2022-10-21 15:38:57 +01:00
af26964c6a
batched_iterator: reset i_item after every time 2022-10-21 15:35:43 +01:00
c5b1501dba
train-predict fixup 2022-10-21 15:27:39 +01:00
42aea7a0cc
plt.close() fixup 2022-10-21 15:23:54 +01:00
12dad3bc87
vis/segmentation: fix titles 2022-10-21 15:22:35 +01:00
0cb2de5d06
train-predict: close matplotlib figures after we've finished
they act like file handles
2022-10-21 15:19:31 +01:00
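The underlying point: matplotlib figures behave like file handles in that they hold memory until explicitly closed, so batch prediction scripts should close each one after saving. A minimal example:

```python
import numpy as np
import matplotlib.pyplot as plt

prediction = np.random.rand(64, 64)  # stand-in for a model prediction
fig, ax = plt.subplots()
ax.imshow(prediction)
fig.savefig("prediction.png")
plt.close(fig)  # release the figure's resources; like a file handle, it isn't freed automatically
```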
81e53efd9c
PNG: create output dir if doesn't exist 2022-10-21 15:17:39 +01:00
3f7db6fa78
fix embedding confusion 2022-10-21 15:15:59 +01:00
847cd97ec4
fixup 2022-10-21 14:26:58 +01:00
0e814b7e98
Contraster → Segmenter 2022-10-21 14:25:43 +01:00
1b658a1b7c
train-predict: can't destructure array when iterating generator
....it seems to lead to undefined behaviour or something
2022-10-20 19:34:04 +01:00
aed2348a95
train_predict: fixup 2022-10-20 15:42:33 +01:00
cc6679c609
batch data; use generator 2022-10-20 15:22:29 +01:00
d306853c42
use right dataset 2022-10-20 15:16:24 +01:00
59cfa4a89a
basename paths 2022-10-20 15:11:14 +01:00
4d8ae21a45
update cli help text 2022-10-19 17:31:42 +01:00
200076596b
finish train_predict 2022-10-19 17:26:40 +01:00
488f78fca5
pretrain_predict: default to parallel_reads=0 2022-10-19 16:59:45 +01:00
63e909d9fc
datasets: add shuffle=True/False to get_filepaths.
This is important because otherwise it SCRAMBLES the filenames, which is a disaster for making predictions in the right order....!
2022-10-19 16:52:07 +01:00
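Something along these lines (a hypothetical sketch; the directory layout and .tfrecord extension are assumptions): sort for a stable, prediction-friendly order and only shuffle when asked.

```python
import os
import random

def get_filepaths(dirpath, shuffle=True):
    """Return the .tfrecord filepaths in dirpath, optionally shuffled.

    Sorting first gives a stable order, so predictions line up with their
    inputs when shuffle=False."""
    filepaths = sorted(
        os.path.join(dirpath, filename)
        for filename in os.listdir(dirpath)
        if filename.endswith(".tfrecord")
    )
    if shuffle:
        random.shuffle(filepaths)
    return filepaths
```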
fe43ddfbf9
start implementing driver for train_predict, but not finished yet 2022-10-18 19:37:55 +01:00
b3ea189d37
segmentation: softmax the output 2022-10-13 21:02:57 +01:00
f121bfb981
fixup summaryfile 2022-10-13 17:54:42 +01:00
5c35c0cee4
model_segmentation: document; remove unused args 2022-10-13 17:50:16 +01:00
f12e6ab905
No need for a CLI arg for feature_dim_in - metadata should contain this 2022-10-13 17:37:16 +01:00
e201372252
write quick Jupyter notebook to test data
....I'm paranoid
2022-10-13 17:27:17 +01:00
ae53130e66
layout 2022-10-13 14:54:20 +01:00
6423bf6702
LayerConvNeXtGamma: avoid adding an EagerTensor to config
Very weird how this is a problem when it wasn't before..
2022-10-12 17:12:07 +01:00
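The usual shape of the fix for this class of bug, sketched as a stand-in layer (not the repo's LayerConvNeXtGamma): get_config() must only return plain, JSON-serialisable Python values, so keep the original float around rather than stashing a tensor in the config.

```python
import tensorflow as tf

class LayerConvNeXtGamma(tf.keras.layers.Layer):
    """Per-channel learnable scale, as in ConvNeXt blocks (illustrative reimplementation)."""
    def __init__(self, dim, const_val=1e-6, **kwargs):
        super().__init__(**kwargs)
        self.dim = dim
        self.const_val = float(const_val)  # keep the plain float, not a tf.constant/EagerTensor
        self.gamma = self.add_weight(
            name="gamma", shape=(dim,),
            initializer=tf.keras.initializers.Constant(self.const_val),
        )

    def call(self, inputs):
        return inputs * self.gamma

    def get_config(self):
        config = super().get_config()
        # Only JSON-serialisable values here; an EagerTensor breaks model saving/loading
        config.update({"dim": self.dim, "const_val": self.const_val})
        return config
```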
32f5200d3b
pass model_arch properly 2022-10-12 16:50:06 +01:00
5933fb1061
fixup 2022-10-11 19:23:41 +01:00
c45b90764e
segmentation: adds xxtiny, but unsure if it's small enough 2022-10-11 19:22:37 +01:00
f4a2c742d9
typo 2022-10-11 19:19:23 +01:00
11f91a7cf4
train: add --arch; default to convnext_i_xtiny 2022-10-11 19:18:01 +01:00
5666c5a0d9
typo 2022-10-10 18:12:51 +01:00
131c0a0a5b
pretrain-predict: create dir if not exists 2022-10-10 18:00:55 +01:00
f883986eaa
Bugfix: modeset to enable TFRecordWriter instead of bare handle 2022-10-06 20:07:59 +01:00
e9a8e2eb57
fixup 2022-10-06 19:23:31 +01:00
9f3ae96894
finish wiring for --water-size 2022-10-06 19:21:50 +01:00
5dac70aa08
typo 2022-10-06 19:17:03 +01:00
2960d3b645
exception → warning 2022-10-06 18:26:40 +01:00
0ee6703c1e
Add todo and comment 2022-10-03 19:06:56 +01:00
2b182214ea
typo 2022-10-03 17:53:10 +01:00
92c380bff5
fiddle with Conv2DTranspose
you need to set the `strides` argument to actually get it to upscale..... :P
2022-10-03 17:51:41 +01:00
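A quick check of the behaviour the commit refers to: with the default strides=(1, 1), Conv2DTranspose with "same" padding keeps the spatial size; strides=2 is what actually doubles it.

```python
import tensorflow as tf

x = tf.zeros((1, 32, 32, 64))
# strides defaults to (1, 1), which keeps the spatial size the same
same_size = tf.keras.layers.Conv2DTranspose(64, kernel_size=2, padding="same")(x)
# strides=2 is what actually doubles the spatial dimensions
upscaled = tf.keras.layers.Conv2DTranspose(64, kernel_size=2, strides=2, padding="same")(x)
print(same_size.shape)  # (1, 32, 32, 64)
print(upscaled.shape)   # (1, 64, 64, 64)
```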
d544553800
fixup 2022-10-03 17:33:06 +01:00
058e3b6248
model_segmentation: cast float → int 2022-10-03 17:31:36 +01:00
04e5ae0c45
model_segmentation: redo reshape
much cheese was applied :P
2022-10-03 17:27:52 +01:00
deffe69202
typo 2022-10-03 16:59:36 +01:00
fc6d2dabc9
Upscale first, THEN convnext... 2022-10-03 16:38:43 +01:00
6a0790ff50
convnext_inverse: add returns; change ordering 2022-10-03 16:32:09 +01:00
e51087d0a9
add reshape layer 2022-09-28 18:22:48 +01:00
a336cdee90
and continues 2022-09-28 18:18:10 +01:00
de47a883d9
missing units 2022-09-28 18:17:22 +01:00
b5e08f92fe
the long night continues 2022-09-28 18:14:09 +01:00
dc159ecfdb
and again 2022-09-28 18:11:46 +01:00
4cf0485e32
fixup... again 2022-09-28 18:10:11 +01:00
030d8710b6
fixup 2022-09-28 18:08:31 +01:00
4ee7f2a0d6
add water thresholding 2022-09-28 18:07:26 +01:00
404dc30f08
and again 2022-09-28 17:39:09 +01:00
4cd8fc6ded
segmentation: param name fix 2022-09-28 17:37:42 +01:00
41ba980d69
segmentation: implement dataset parser 2022-09-28 17:19:21 +01:00
d618e6f8d7
pretrain-predict: params.json → metadata.jsonl 2022-09-28 16:35:22 +01:00
e9e6139c7a
typo 2022-09-28 16:28:18 +01:00
3dee3d8908
update cli help 2022-09-28 16:23:47 +01:00
d765b3b14e
fix crash 2022-09-27 18:43:43 +01:00
f4d1d1d77e
just wh 2022-09-27 18:25:45 +01:00
4c24d69ae6
$d → +d 2022-09-27 18:17:07 +01:00
cdb19b4d9f
fixup 2022-09-27 18:13:21 +01:00
c4d3c16873
add some logging 2022-09-27 18:10:58 +01:00
3772c3227e
fixup 2022-09-27 17:57:21 +01:00