{"componentChunkName":"component---src-templates-blog-post-js","path":"/blog/learning-on-tree-architectures-outperforms-a-convolutional-feedforward-network/","result":{"data":{"site":{"siteMetadata":{"title":"No Frills News"}},"contentfulNfnPost":{"postTitle":"Learning on tree architectures outperforms a convolutional feedforward network","slug":"learning-on-tree-architectures-outperforms-a-convolutional-feedforward-network","createdLocal":"2023-01-31 14:30:59.783696","publishDate":"None","feedName":"Image Recognition","sourceUrl":{"sourceUrl":"https://www.nature.com/articles/s41598-023-27986-6"},"postSummary":{"childMarkdownRemark":{"html":"<p>The learning-rate scheduler for LeNet-5, η = 0.01, 0.005, 0.001 for epochs = [0, 100), [100, 150), [150, 200], respectively.\nFor Tree-3 (K = 15, M = 80) and 10 Tree-3 (K = 15, M = 80), η decays by a factor of 0.6 every 20 epochs.\nThe learning rate scheduler was the same as for Tree-3 (K = 15, M = 16), on the CIFAR-10 dataset.\nThe gray squares in the first layer represent convolutional hidden units, ({\\sigma }<em>{Conv}), and max-pooling hidden units that are equal zero, except several denoted by RGB dots.\nThe non-zero tree output hidden units, ({\\sigma }</em>{Tree}), are denoted by black dots.</p>"}}}},"pageContext":{"slug":"learning-on-tree-architectures-outperforms-a-convolutional-feedforward-network"}}}