Commit graph

606 commits

Author SHA1 Message Date
0129c35a35
LossDice: remove weird K.* functions 2022-12-09 19:06:26 +00:00
659fc97fd4
fix crash 2022-12-09 18:39:27 +00:00
e22c0981e6
actually use dice loss 2022-12-09 18:35:17 +00:00
649c262960
mono: switch loss from crossentropy to dice 2022-12-09 18:13:37 +00:00
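The crossentropy→dice switch above can be illustrated with the standard soft Dice formulation. This is a hedged NumPy sketch, not the repo's actual `LossDice` class (which, per the `K.*` mention above, is Keras-based); the `smooth` term and its value are common conventions assumed here, not taken from the source.

```python
import numpy as np

def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss: 1 - (2*|X∩Y| + s) / (|X| + |Y| + s).

    Unlike per-pixel crossentropy, this scores the overlap of the whole
    predicted mask against the target, which copes better with class
    imbalance in segmentation targets.
    """
    intersection = np.sum(y_true * y_pred)
    total = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - (2.0 * intersection + smooth) / (total + smooth)
```

A perfect prediction drives the loss to 0; a fully disjoint one pushes it towards 1.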
7fd7c750d6
jupyter: identity test
status: FAILED, as usual....!
Don't worry though, 'cause we has a *planses*..... MUHAHAHAHAHAHAHA
* cue evil laugh *
2022-12-09 18:07:56 +00:00
cf9e8aa237
jupyter: convnext-mono identity test 2022-12-09 15:50:27 +00:00
2a1772a211
convnext_inverse: add shallow 2022-12-08 19:10:12 +00:00
c27869630a
I hate VSCode's git commit interface
it doesn't let you amend
2022-12-08 18:58:54 +00:00
b3345963f3
missing arg pass 2022-12-08 18:58:32 +00:00
3dde9b69da
fixup 2022-12-08 18:56:32 +00:00
6fce39f696
WHY?!?!?! 2022-12-08 18:55:53 +00:00
26766366fc
I hate the python code intelligence
it's bad
2022-12-08 18:55:15 +00:00
ff56f591c7
I hate python 2022-12-08 18:53:37 +00:00
d37e7224f5
train-mono: tidy up arg passing 2022-12-08 18:47:03 +00:00
b53db648bf
fixup 2022-12-08 18:31:42 +00:00
18c0210704
typo 2022-12-08 17:00:25 +00:00
a3c9416cf0
LossCrossentropy: don't sum 2022-12-08 16:57:11 +00:00
08046340f4
dataset_mono: normalise heightmap 2022-12-08 16:10:34 +00:00
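The heightmap normalisation referenced above could be as simple as min-max scaling. This is purely an assumed scheme for illustration — the commit doesn't say what `dataset_mono` actually does (it could equally be a fixed-range divide or z-score), and `normalise_heightmap` is a hypothetical name.

```python
import numpy as np

def normalise_heightmap(heightmap):
    """Min-max scale a heightmap into [0, 1] (assumed scheme, see note above)."""
    hmin, hmax = heightmap.min(), heightmap.max()
    if hmax == hmin:
        # Flat terrain: avoid a divide-by-zero, return all zeros.
        return np.zeros_like(heightmap, dtype=np.float64)
    return (heightmap - hmin) / (hmax - hmin)
```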
d997157f55
dataset_mono: log when using heightmap 2022-12-06 19:30:11 +00:00
468c150570
slurm-train-mono: add HEIGHTMAP 2022-12-06 19:28:06 +00:00
d0f2e3d730
readfile: do transparent gzip by default
....but there's a flag to turn it off if needed
2022-12-06 19:27:39 +00:00
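A transparent-gzip `readfile` like the one described above typically sniffs the two gzip magic bytes (`0x1f 0x8b`) and decompresses only when they are present. This is a minimal sketch of that idea, not the repo's implementation; the function name matches the commit, but the signature and the `transparent_gzip` flag name are assumptions.

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of every gzip stream

def readfile(filepath, transparent_gzip=True):
    """Read a text file, transparently gunzipping it if it looks gzipped.

    Pass transparent_gzip=False to turn the sniffing off and read raw bytes.
    """
    with open(filepath, "rb") as handle:
        data = handle.read()
    if transparent_gzip and data[:2] == GZIP_MAGIC:
        data = gzip.decompress(data)
    return data.decode("utf-8")
```

Sniffing the magic bytes (rather than trusting a `.gz` extension) means the same call works whether or not the dataset files on disk happen to be compressed.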
eac6472c97
Implement support for (optionally) taking a heightmap in 2022-12-06 18:55:58 +00:00
f92b2b3472
according to the equation it looks like it's 2 2022-12-02 17:22:46 +00:00
cad82cd1bc
CBAM: unsure if it's 1 or 3 dense layers in the shared mlp 2022-12-02 17:21:13 +00:00
62f6a993bb
implement CBAM, but it's UNTESTED
Convolutional Block Attention Module.
2022-12-02 17:17:45 +00:00
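For reference, CBAM applies channel attention (a shared MLP over average- and max-pooled features — in the original paper the shared MLP has a single hidden layer, i.e. two dense layers) followed by spatial attention (a conv over channel-wise average and max maps). Below is a hedged NumPy sketch of both stages; all function names and weight shapes are illustrative, and the paper's 7×7 spatial conv is simplified to a per-pixel 1×1 to keep the sketch short.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(x, w1, w2):
    """Channel attention: shared two-layer MLP over avg- and max-pooled features.

    x: (H, W, C) feature map; w1: (C, C//r); w2: (C//r, C).
    """
    avg = x.mean(axis=(0, 1))                    # (C,) average-pooled descriptor
    mx = x.max(axis=(0, 1))                      # (C,) max-pooled descriptor
    mlp = lambda v: np.maximum(v @ w1, 0) @ w2   # the *shared* MLP (ReLU hidden)
    scale = sigmoid(mlp(avg) + mlp(mx))          # (C,) per-channel gate
    return x * scale                             # broadcast over H, W

def cbam_spatial_attention(x, kernel):
    """Spatial attention over channel-wise avg and max maps.

    kernel: (2, 1) — stands in for the paper's 7x7 conv (simplification).
    """
    avg = x.mean(axis=2, keepdims=True)          # (H, W, 1)
    mx = x.max(axis=2, keepdims=True)            # (H, W, 1)
    stacked = np.concatenate([avg, mx], axis=2)  # (H, W, 2)
    scale = sigmoid(stacked @ kernel)            # (H, W, 1) per-pixel gate
    return x * scale
```

Both gates are sigmoids, so each stage can only scale features down, never amplify them.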
9d666c3b38
train mono: type=int → float 2022-12-01 15:39:44 +00:00
53dfa32685
model_mono: log learning rate 2022-12-01 15:10:51 +00:00
c384d55dff
add arg to adjust learning rate 2022-11-29 20:55:00 +00:00
8e23e9d341
model_segmenter: we're no longer using sparse 2022-11-29 19:28:27 +00:00
9a2b4c6838
dsseg: fix reshape/onehot ordering 2022-11-29 19:28:13 +00:00
df774146d9
dataset_segmenter: reshape, not squeeze 2022-11-29 19:24:54 +00:00
77b8a1a8db
dataset_segmenter: squeeze 2022-11-29 19:16:15 +00:00
2258b5a229
slurm-train: reduce RAM required by 10GB 2022-11-29 19:15:34 +00:00
01101ad30b
losscrossentropy: return the reduced value * facepalm * 2022-11-29 19:07:08 +00:00
ff65393e78
log file naming update 2022-11-29 18:41:14 +00:00
37f196a785
LossCrossentropy: add kwargs 2022-11-29 15:40:35 +00:00
838ff56a3b
mono: fix loading checkpoint 2022-11-29 15:25:11 +00:00
dba6cbffcd
WHY. * facepalms * 2022-11-28 19:33:42 +00:00
57b8eb93fb
fixup 2022-11-28 19:09:35 +00:00
6640a41bb7
almost got it....? it's not what I expected....! 2022-11-28 19:08:50 +00:00
f48473b703
fixup 2022-11-28 19:00:11 +00:00
f6feb125e3
this is some serious debugging.
This commit will produce an extremely large volume of output.
2022-11-28 18:57:41 +00:00
09f81b0746
train_mono: debug
this commit will generate a large amount of debug output.
2022-11-28 16:46:17 +00:00
f39e4ade70
LayerConvNextGamma: fix config serialisation bug
.....this is unlikely to be the problem as this bug is in an unused code path.
2022-11-25 21:16:31 +00:00
e7410fb480
train_mono_predict: limit label size to 64x64
that's the size the model predicts
2022-11-25 17:47:17 +00:00
51dd484d13
fixup 2022-11-25 16:55:45 +00:00
884c4eb150
rainfall_stats: formatting again 2022-11-24 19:08:07 +00:00
bfe038086c
rainfall_stats: formatting 2022-11-24 19:07:44 +00:00
7dba03200f
fixup 2022-11-24 19:06:48 +00:00
e5258b9c66
typo 2022-11-24 19:06:13 +00:00