
Commit

fix navigation (and rerun eveyrthing with latest nimib)
pietroppeter committed May 7, 2021
1 parent 5fab60c commit 1bf9d75
Showing 10 changed files with 273 additions and 193 deletions.
Binary file added chessboard.png
136 changes: 85 additions & 51 deletions drafts/arraymancer_tutorial.html
@@ -1,16 +1,43 @@
<!DOCTYPE html>
<html lang="en-us">
<head>
-<title>draft_arraymancer_tutorial.nim</title>
+<title>drafts\arraymancer_tutorial.nim</title>
<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2280%22>🐳</text></svg>">
<meta content="text/html; charset=utf-8" http-equiv="content-type">
<meta content="width=device-width, initial-scale=1" name="viewport">
<link rel='stylesheet' href='https://unpkg.com/normalize.css/'>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/kognise/water.css@latest/dist/light.min.css">
<link rel='stylesheet' href='https://cdn.jsdelivr.net/gh/pietroppeter/nimib/assets/atom-one-light.css'>
</head>
<style>
.nb-box {
display: flex;
align-items: center;
justify-content: space-between;
}
.nb-small {
font-size: 0.8rem;
}
button.nb-small {
float: right;
padding: 2px;
padding-right: 5px;
padding-left: 5px;
}
section#source {
display:none
}
</style>

</head>
<body>
<main>
<header>
<div class="nb-box">
<span><a href="..">🏡</a></span>
<span><code>drafts\arraymancer_tutorial.nim</code></span>
<span><a href="https://github.com/pietroppeter/nblog"><svg aria-hidden="true" width="1.2em" height="1.2em" style="vertical-align: middle;" preserveAspectRatio="xMidYMid meet" viewBox="0 0 16 16"><path fill-rule="evenodd" d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59c.4.07.55-.17.55-.38c0-.19-.01-.82-.01-1.49c-2.01.37-2.53-.49-2.69-.94c-.09-.23-.48-.94-.82-1.13c-.28-.15-.68-.52-.01-.53c.63-.01 1.08.58 1.23.82c.72 1.21 1.87.87 2.33.66c.07-.52.28-.87.51-1.07c-1.78-.2-3.64-.89-3.64-3.95c0-.87.31-1.59.82-2.15c-.08-.2-.36-1.02.08-2.12c0 0 .67-.21 2.2.82c.64-.18 1.32-.27 2-.27c.68 0 1.36.09 2 .27c1.53-1.04 2.2-.82 2.2-.82c.44 1.1.16 1.92.08 2.12c.51.56.82 1.27.82 2.15c0 3.07-1.87 3.75-3.65 3.95c.29.25.54.73.54 1.48c0 1.07-.01 1.93-.01 2.2c0 .21.15.46.55.38A8.013 8.013 0 0 0 16 8c0-4.42-3.58-8-8-8z" fill="#000"></path></svg></a></span>
</div>
<hr>
</header><main>
<h1>Arraymancer Tutorial - First steps</h1>
<blockquote>
<p>A remake of the original tutorial using nimib: <a href="https://mratsim.github.io/Arraymancer/tuto.first_steps.html">https://mratsim.github.io/Arraymancer/tuto.first_steps.html</a></p>
@@ -32,9 +59,11 @@ <h2>Tensor properties</h2>

<span class="hljs-keyword">let</span> d = [[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>], [<span class="hljs-number">4</span>, <span class="hljs-number">5</span>, <span class="hljs-number">6</span>]].toTensor()
<span class="hljs-keyword">echo</span> d</code></pre>
-<pre><samp>Tensor[system.int] of shape [2, 3]" on backend "Cpu"
+<pre><samp>Tensor[system.int] of shape [2, 3]&quot; on backend &quot;Cpu&quot;
|1 2 3|
-|4 5 6|</samp></pre>
+|4 5 6|
+
+</samp></pre>
<blockquote>
<p>message changed, it was: <code>Tensor of shape 2x3 of type &quot;int&quot; on backend &quot;Cpu&quot;</code></p>
</blockquote>
@@ -45,7 +74,8 @@ <h2>Tensor properties</h2>
<pre><samp>d.rank = 2
d.shape = [2, 3]
d.strides = [3, 1]
-d.offset = 0</samp></pre>
+d.offset = 0
+</samp></pre>
<blockquote>
<p>echo of shape and strides changed (dropped @)</p>
</blockquote>
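The strides shown above follow directly from the row-major layout: each stride is the product of the trailing dimensions, and the offset of a fresh tensor is 0. A small pure-Python sketch of the rule (the `strides_for` helper is hypothetical, for illustration only):

```python
def strides_for(shape):
    """Row-major (C-order) strides, in elements, for a given shape."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        # stride of axis i = number of elements in all trailing axes
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

print(strides_for([2, 3]))     # [3, 1] -- matches d.strides above
print(strides_for([2, 3, 4]))  # [12, 4, 1]
```

This also makes clear why `d.rank` is just the length of the shape.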
@@ -57,9 +87,11 @@ <h2>Tensor creation</h2>
[[<span class="hljs-number">111</span>, <span class="hljs-number">222</span>, <span class="hljs-number">333</span>], [<span class="hljs-number">444</span>, <span class="hljs-number">555</span>, <span class="hljs-number">666</span>]],
[[<span class="hljs-number">1111</span>, <span class="hljs-number">2222</span>, <span class="hljs-number">3333</span>], [<span class="hljs-number">4444</span>, <span class="hljs-number">5555</span>, <span class="hljs-number">6666</span>]]].toTensor()
<span class="hljs-keyword">echo</span> c</code></pre>
-<pre><samp>Tensor[system.int] of shape [4, 2, 3]" on backend "Cpu"
+<pre><samp>Tensor[system.int] of shape [4, 2, 3]&quot; on backend &quot;Cpu&quot;
| | 1 2 3 | 11 22 33 | 111 222 333 | 1111 2222 3333|
-| | 4 5 6 | 44 55 66 | 444 555 666 | 4444 5555 6666|</samp></pre>
+| | 4 5 6 | 44 55 66 | 444 555 666 | 4444 5555 6666|
+
+</samp></pre>
<blockquote>
<p>I am not sure where the additional pipes come from, maybe a bug?</p>
</blockquote>
@@ -71,55 +103,71 @@ <h2>Tensor creation</h2>
tensor of the same shape but filled with 0 and 1 respectively.</p>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> e = newTensor[<span class="hljs-built_in">bool</span>]([<span class="hljs-number">2</span>, <span class="hljs-number">3</span>])
dump e</code></pre>
-<pre><samp>e = Tensor[system.bool] of shape [2, 3]" on backend "Cpu"
+<pre><samp>e = Tensor[system.bool] of shape [2, 3]&quot; on backend &quot;Cpu&quot;
|false false false|
-|false false false|</samp></pre>
+|false false false|
+
+</samp></pre>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> f = zeros[<span class="hljs-built_in">float</span>]([<span class="hljs-number">4</span>, <span class="hljs-number">3</span>])
dump f</code></pre>
-<pre><samp>f = Tensor[system.float] of shape [4, 3]" on backend "Cpu"
+<pre><samp>f = Tensor[system.float] of shape [4, 3]&quot; on backend &quot;Cpu&quot;
|0.0 0.0 0.0|
|0.0 0.0 0.0|
|0.0 0.0 0.0|
-|0.0 0.0 0.0|</samp></pre>
+|0.0 0.0 0.0|
+
+</samp></pre>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> g = ones[<span class="hljs-built_in">float</span>]([<span class="hljs-number">4</span>, <span class="hljs-number">3</span>])
dump g</code></pre>
-<pre><samp>g = Tensor[system.float] of shape [4, 3]" on backend "Cpu"
+<pre><samp>g = Tensor[system.float] of shape [4, 3]&quot; on backend &quot;Cpu&quot;
|1.0 1.0 1.0|
|1.0 1.0 1.0|
|1.0 1.0 1.0|
-|1.0 1.0 1.0|</samp></pre>
+|1.0 1.0 1.0|
+
+</samp></pre>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> tmp = [[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>], [<span class="hljs-number">3</span>, <span class="hljs-number">4</span>]].toTensor()
<span class="hljs-keyword">let</span> h = tmp.zeros_like
dump h</code></pre>
-<pre><samp>h = Tensor[system.int] of shape [2, 2]" on backend "Cpu"
+<pre><samp>h = Tensor[system.int] of shape [2, 2]&quot; on backend &quot;Cpu&quot;
|0 0|
-|0 0|</samp></pre>
+|0 0|
+
+</samp></pre>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> i = tmp.ones_like
dump i</code></pre>
-<pre><samp>i = Tensor[system.int] of shape [2, 2]" on backend "Cpu"
+<pre><samp>i = Tensor[system.int] of shape [2, 2]&quot; on backend &quot;Cpu&quot;
|1 1|
-|1 1|</samp></pre>
+|1 1|
+
+</samp></pre>
<h2>Accessing and modifying a value</h2>
<p>Tensor values can be retrieved or set with array brackets.</p>
<pre><code class="nim hljs"><span class="hljs-keyword">var</span> a = toSeq(<span class="hljs-number">1</span> .. <span class="hljs-number">24</span>).toTensor().reshape(<span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>)
<span class="hljs-keyword">echo</span> a</code></pre>
-<pre><samp>Tensor[system.int] of shape [2, 3, 4]" on backend "Cpu"
+<pre><samp>Tensor[system.int] of shape [2, 3, 4]&quot; on backend &quot;Cpu&quot;
| | 1 2 3 4 | 13 14 15 16|
| | 5 6 7 8 | 17 18 19 20|
-| | 9 10 11 12 | 21 22 23 24|</samp></pre>
+| | 9 10 11 12 | 21 22 23 24|
+
+</samp></pre>
<pre><code class="nim hljs">dump a[<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>]
<span class="hljs-keyword">echo</span> a</code></pre>
<pre><samp>a[1, 1, 1] = 18
-Tensor[system.int] of shape [2, 3, 4]" on backend "Cpu"
+Tensor[system.int] of shape [2, 3, 4]&quot; on backend &quot;Cpu&quot;
| | 1 2 3 4 | 13 14 15 16|
| | 5 6 7 8 | 17 18 19 20|
-| | 9 10 11 12 | 21 22 23 24|</samp></pre>
+| | 9 10 11 12 | 21 22 23 24|
+
+</samp></pre>
<pre><code class="nim hljs">a[<span class="hljs-number">1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">1</span>] = <span class="hljs-number">999</span>
<span class="hljs-keyword">echo</span> a</code></pre>
-<pre><samp>Tensor[system.int] of shape [2, 3, 4]" on backend "Cpu"
+<pre><samp>Tensor[system.int] of shape [2, 3, 4]&quot; on backend &quot;Cpu&quot;
| | 1 2 3 4 | 13 14 15 16|
| | 5 6 7 8 | 17 999 19 20|
-| | 9 10 11 12 | 21 22 23 24|</samp></pre>
+| | 9 10 11 12 | 21 22 23 24|
+
+</samp></pre>
<h2>Copying</h2>
<p>Warning ⚠: When you do the following, both tensors <code>a</code> and <code>b</code> will share data.
Full copy must be explicitly requested via the <code>clone</code> function.</p>
@@ -137,19 +185,26 @@ <h2>Copying</h2>
dump a[<span class="hljs-number">1</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>]</code></pre>
<pre><samp>a[1, 0, 0] = 13
a[1, 0, 0] = 13
-a[1, 0, 0] = 0</samp></pre>
+a[1, 0, 0] = 0
+</samp></pre>
<p>This behaviour is the same as in NumPy and Julia;
the reasons can be found in the following
<a href="https://mratsim.github.io/Arraymancer/uth.copy_semantics.html">under the hood article</a>.</p>
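The same reference-versus-copy distinction can be demonstrated with plain Python lists (an illustrative sketch mirroring the Nim `dump` output above, not part of the original post):

```python
import copy

a = [[1, 2], [3, 4]]
b = a                  # no copy: a and b name the same object
b[1][0] = 0
print(a[1][0])         # 0 -- mutating through b also changed a

c = copy.deepcopy(a)   # explicit full copy, analogous to Arraymancer's clone
c[1][0] = 13
print(a[1][0])         # still 0 -- a is unaffected
```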

</main>
<footer>
<hr>
-<span id="made">made with <a href="https://github.com/pietroppeter/nimib">nimib 🐳</a></span>
-<button id="show" onclick="toggleSourceDisplay()">Show Source</button>
+<div class="nb-box">
+<span><span class="nb-small">made with <a href="https://pietroppeter.github.io/nimib/">nimib 🐳</a></span></span>
+<span></span>
+<span><button class="nb-small" id="show" onclick="toggleSourceDisplay()">Show Source</button></span>
+</div>
</footer>
<section id="source">
<pre><code class="nim hljs"><span class="hljs-keyword">import</span> nimib

<span class="hljs-comment"># I want to use this notebook also to show how one can customize the nbCode block output</span>
<span class="hljs-comment"># (to have output shown as comments) and also possibly to stitch together subsequent code samples</span>
<span class="hljs-comment"># (I should use a render change in nbDoc). Probably I should do this after rendering refactoring.</span>
nbInit
nbText: <span class="hljs-string">&quot;&quot;&quot;
# Arraymancer Tutorial - First steps
@@ -284,8 +339,7 @@ <h2>Copying</h2>
[under the hood article](https://mratsim.github.io/Arraymancer/uth.copy_semantics.html).
&quot;&quot;&quot;</span>
nbShow</code></pre>
-</section>
-<script>
+</section><script>
function toggleSourceDisplay() {
var btn = document.getElementById("show")
var source = document.getElementById("source");
@@ -297,25 +351,5 @@ <h2>Copying</h2>
source.style.display = "none";
}
}
-</script>
-<style>
-span#made {
-font-size: 0.8rem;
-}
-button#show {
-font-size: 0.8rem;
-}
-
-button#show {
-float: right;
-padding: 2px;
-padding-right: 5px;
-padding-left: 5px;
-}
-section#source {
-display:none
-}
-</style>
-</footer>
-</body>
+</script></body>
</html>
25 changes: 15 additions & 10 deletions drafts/bernoulli_and_beyond.html
@@ -1,7 +1,7 @@
<!DOCTYPE html>
<html lang="en-us">
<head>
-<title>bernoulli_and_beyond.nim</title>
+<title>drafts\bernoulli_and_beyond.nim</title>
<link rel="icon" href="data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 viewBox=%220 0 100 100%22><text y=%22.9em%22 font-size=%2280%22>🐳</text></svg>">
<meta content="text/html; charset=utf-8" http-equiv="content-type">
<meta content="width=device-width, initial-scale=1" name="viewport">
@@ -30,13 +30,12 @@
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.css" integrity="sha384-AfEj0r4/OFrOo5t7NnNe46zW/tFgW6x/bCJG8FqQCEo3+Aro6EYUG4+cU+KJWu/X" crossorigin="anonymous">
<script defer src="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.js" integrity="sha384-g7c+Jr9ZivxKLnZTDUhnkOnsh30B4H0rpLUpJ4jAIKs4fnJI+sEnkvrMWph2EDg4" crossorigin="anonymous"></script>
<script defer src="https://cdn.jsdelivr.net/npm/[email protected]/dist/contrib/auto-render.min.js" integrity="sha384-mll67QQFJfxn0IYznZYonOWZ644AWYC+Pt2cHqMaRhXVrursRwvLnLaebdGIlYNa" crossorigin="anonymous" onload="renderMathInElement(document.body,{delimiters:[{left: '$$', right: '$$', display: true},{left: '$', right: '$', display: false}]});"></script>
-<script async defer data-domain="pietroppeter.github.io/nblog" src="https://plausible.io/js/plausible.js"></script>
</head>
<body>
<header>
<div class="nb-box">
-<span><a href=".">🏡</a></span>
-<span><code>bernoulli_and_beyond.nim</code></span>
+<span><a href="..">🏡</a></span>
+<span><code>drafts\bernoulli_and_beyond.nim</code></span>
<span><a href="https://github.com/pietroppeter/nblog"><svg aria-hidden="true" width="1.2em" height="1.2em" style="vertical-align: middle;" preserveAspectRatio="xMidYMid meet" viewBox="0 0 16 16"><path fill-rule="evenodd" d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59c.4.07.55-.17.55-.38c0-.19-.01-.82-.01-1.49c-2.01.37-2.53-.49-2.69-.94c-.09-.23-.48-.94-.82-1.13c-.28-.15-.68-.52-.01-.53c.63-.01 1.08.58 1.23.82c.72 1.21 1.87.87 2.33.66c.07-.52.28-.87.51-1.07c-1.78-.2-3.64-.89-3.64-3.95c0-.87.31-1.59.82-2.15c-.08-.2-.36-1.02.08-2.12c0 0 .67-.21 2.2.82c.64-.18 1.32-.27 2-.27c.68 0 1.36.09 2 .27c1.53-1.04 2.2-.82 2.2-.82c.44 1.1.16 1.92.08 2.12c.51.56.82 1.27.82 2.15c0 3.07-1.87 3.75-3.65 3.95c.29.25.54.73.54 1.48c0 1.07-.01 1.93-.01 2.2c0 .21.15.46.55.38A8.013 8.013 0 0 0 16 8c0-4.42-3.58-8-8-8z" fill="#000"></path></svg></a></span>
</div>
<hr>
@@ -82,7 +81,8 @@ <h3>Bernoulli distribution</h3>
<span class="hljs-keyword">let</span> s = take(rng, b, <span class="hljs-number">10</span>)
<span class="hljs-keyword">echo</span> s</code></pre>
<pre><samp>1.0
-@[1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0]</samp></pre>
+@[1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0]
+</samp></pre>
<p>We can also compute the mean ($p$), variance ($p(1-p)$) and standard deviation ($\sqrt{p(1-p)}$)</p>
<pre><code class="nim hljs"><span class="hljs-keyword">echo</span> rng.mean b
<span class="hljs-keyword">echo</span> rng.variance b
@@ -93,19 +93,22 @@ <h3>Bernoulli distribution</h3>
0.08999999999999998
0.08999999999999998
0.3
-0.3</samp></pre>
+0.3
+</samp></pre>
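The figures above match the closed-form Bernoulli moments; a quick check in Python (p = 0.9 is inferred from the printed variance, it is not stated explicitly in the diff):

```python
import math

p = 0.9                        # success probability, inferred from the output above
mean = p                       # E[X] = p
variance = p * (1 - p)         # Var[X] = p(1 - p)
std_dev = math.sqrt(variance)  # sqrt(p(1 - p))
print(mean, variance, std_dev)
```

Note that floating-point arithmetic reproduces the 0.08999999999999998 seen in the output rather than an exact 0.09.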
<h3>Choice distribution</h3>
<p>A categorical distribution already implemented in alea is the one where all categories have equal probability:</p>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> dice = choice(@[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>, <span class="hljs-number">5</span>, <span class="hljs-number">6</span>])
<span class="hljs-keyword">block</span>:
<span class="hljs-keyword">let</span> s = take(rng, dice, <span class="hljs-number">10</span>)
<span class="hljs-keyword">echo</span> s</code></pre>
-<pre><samp>@[4, 4, 6, 1, 6, 5, 5, 5, 1, 3]</samp></pre>
+<pre><samp>@[4, 4, 6, 1, 6, 5, 5, 5, 1, 3]
+</samp></pre>
<p>Note that the <code>mean</code> will not work for <code>dice</code> since it is implemented only for <code>RandomVar[float]</code>.</p>
<p>In order to have a mean I could do the following:</p>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> fdice = choice(@[<span class="hljs-number">1.0</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>, <span class="hljs-number">5</span>, <span class="hljs-number">6</span>])
<span class="hljs-keyword">echo</span> rng.mean fdice</code></pre>
-<pre><samp>3.5</samp></pre>
+<pre><samp>3.5
+</samp></pre>
<h3>Categorical distribution</h3>
<p>Finally, to <a href="https://github.com/andreaferretti/alea#defining-custom-distributions">define a custom distribution</a>
I just need to define:</p>
@@ -137,7 +140,8 @@ <h3>Categorical distribution</h3>
<p>and we can already use this to create a biased dice, for example one with twice the chance of outputting an odd number as an even one:</p>
<pre><code class="nim hljs"><span class="hljs-keyword">let</span> oddDice = categorical(@[(<span class="hljs-number">1.0</span>, <span class="hljs-number">0.2</span>), (<span class="hljs-number">2.0</span>, <span class="hljs-number">0.1</span>), (<span class="hljs-number">3.0</span>, <span class="hljs-number">0.2</span>), (<span class="hljs-number">4.0</span>, <span class="hljs-number">0.1</span>),
(<span class="hljs-number">5.0</span>, <span class="hljs-number">0.2</span>), (<span class="hljs-number">6.0</span>, <span class="hljs-number">0.1</span>)])</code></pre>
-<pre><samp>probabilities do not add to 1.0, they will be normalized to sum to 1</samp></pre>
+<pre><samp>probabilities do not add to 1.0, they will be normalized to sum to 1
+</samp></pre>
<p>Now to close the loop I need to implement the <code>sample</code> proc:</p>
<pre><code class="nim hljs"><span class="hljs-keyword">proc</span> sample*(rng: <span class="hljs-keyword">var</span> <span class="hljs-type">Random</span>; c: <span class="hljs-type">Categorical</span>): <span class="hljs-built_in">float</span> =
<span class="hljs-keyword">var</span> tot = <span class="hljs-number">0.0</span>
Expand All @@ -153,7 +157,8 @@ <h3>Categorical distribution</h3>
<span class="hljs-keyword">echo</span> s
<span class="hljs-keyword">echo</span> rng.mean(oddDice)</code></pre>
<pre><samp>@[1.0, 1.0, 1.0, 3.0, 5.0, 1.0, 1.0, 6.0, 5.0, 3.0, 5.0, 4.0, 3.0, 3.0, 3.0, 5.0, 4.0, 5.0, 1.0, 1.0]
-3.34143</samp></pre>
+3.34143
+</samp></pre>
<p>And the output seems reasonable (I could probably overload <code>mean</code> and use it to compute an exact mean).</p>
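The cumulative-sum loop in the Nim <code>sample</code> proc above can be sketched in Python; <code>sample_categorical</code> is a hypothetical helper for illustration, not part of the alea API, and it normalizes the weights just as alea warns it does:

```python
import random

def sample_categorical(rng, pairs):
    """pairs: list of (value, weight). Weights need not sum to 1; they
    are normalized. Walk the cumulative sum until the draw is covered."""
    total = sum(w for _, w in pairs)
    u = rng.random() * total
    cum = 0.0
    for value, weight in pairs:
        cum += weight
        if u < cum:
            return value
    return pairs[-1][0]  # guard against floating-point rounding at the top

rng = random.Random(42)
odd_dice = [(1.0, 0.2), (2.0, 0.1), (3.0, 0.2), (4.0, 0.1),
            (5.0, 0.2), (6.0, 0.1)]
draws = [sample_categorical(rng, odd_dice) for _ in range(10_000)]
mean = sum(draws) / len(draws)
print(mean)  # close to the exact mean 3.0 / 0.9 = 3.33...
```

The exact mean after normalization is 10/3 ≈ 3.33, which is consistent with the empirical 3.34143 printed above.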
<h2>Conclusions</h2>
<p>It was rather straightforward to implement a categorical distribution in alea.
