2,225,852 events, 1,311,601 push events, 1,764,413 commit messages, 105,395,043 characters
Hello, World!

Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.

But, in a larger sense, we can not dedicate - we can not consecrate - we can not hallow - this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us - that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion - that we here highly resolve that these dead shall not have died in vain - that this nation, under God, shall have a new birth of freedom - and that government of the people, by the people, for the people, shall not perish from the earth.
Add [python-setup].experimental_lockfile to consume lockfiles (#12316)
This allows you to use the new lockfile format, generated by pip-tools via ./pants --tag=-lockfile_ignore lock :: and pantsbuild/pants#12300.
A lockfile cannot be used at the same time as a constraints file. This makes the code easier to implement and means that we don't break any prior APIs. We will likely deprecate constraints when the dust settles.
There are several major deficiencies:
- Only pex_from_targets.py consumes this lockfile. This means that tool lockfiles will now have no constraints and no lockfile, for now.
- Does not handle requirements disjoint to the lockfile.
- Does not support multiple user lockfiles, which, for example, is necessary to properly handle platforms with pex_binary and python_awslambda: we need one lockfile per platform, as demonstrated in https://github.com/Eric-Arellano/lockfile-platforms-problem/tree/main/multiple_pex__stu_proposal.
We're currently using pip's --constraints file support, which allows you to specify constraints that may not actually be used. At the same time, we default to [python-setup].resolve_all_constraints, which first installs the entire constraints file and then uses Pex's repository PEX feature to extract the relevant subset. This is generally a performance optimization, but there are times --resolve-all-constraints is not desirable:
- It is not safe to first install the superset, and you can only install the proper subset. This especially can happen when platforms are used. See pantsbuild/pants#12222. (We proactively disable --resolve-all-constraints when platforms are used.)
- The user does not like the performance tradeoff, e.g. because they have a huge repository PEX so it's slow to access.
In contrast, this PR stops using --constraints and roughly always does [python-setup].resolve_all_constraints (we now run pex -r requirements.txt --no-transitive and use repository PEXes). Multiple user lockfiles will allow us to solve the above issues:
- It is not safe to first install the superset, and you can only install the proper subset: we'll have a distinct lockfile for each platform, which avoids this situation. See https://github.com/Eric-Arellano/lockfile-platforms-problem/tree/main/multiple_pex__stu_proposal for an example.
- The user does not like the performance tradeoff: they can use multiple lockfiles to work around this.
Always using [python-setup].resolve_all_constraints reduces complexity: less code to support, fewer concepts for users to learn.
Likewise, if we did still want to use --constraints, we would also need to upgrade Pex to use Pip 21+, which gained support for URL constraints. We hacked around URL constraints before, but that isn't robust. However, Pip 21+ drops Python 2 and 3.5 support: we'd need to release Pex 3 w/o Py2 support, and upgrade Pants to have workarounds that allow Py2 to still be used. To avoid project creep, it's better to punt on Pex 3.
[ci skip-rust] [ci skip-build-wheels]
Airlock Widening
You know, you never really appreciate how XL station miners are until you realize that those swole explorers come back with gigantic crates of ore in tow. You can't blame a mapper for forgetting that doorways for mining need to be extra THICC, that's with 3 C's mind you because 1-wide doorways just don't do. Even if they look better. Looks only dictate so much, and while we need to keep worrying about icebox aesthetics (it's really not an aesthetic station, time hasn't done it well) we can't let aesthetics get in the way of design convention. And when your mapwork causes air alarms to go off left right and center, you know you've done something wrong. I want it to be 1-wide, oh I do, but Miners won't like that the firelocks trap them in their icy igloo. Plus, I remapped all this to PREVENT this exact scenario, and I had so much damn fun with it that I almost put it back in. Thank god for excessive review.
Introduce new boot flow to handle SAR 2SI
The existing method for handling legacy SAR is:
- Mount /sbin tmpfs overlay
- Dump all patched/new files into /sbin
- Magic mount root dir and re-exec patched stock init
With Android 11 removing the /sbin folder, it is quite obvious that things completely break down right in step 1.
To overcome this issue, we have to find a way to swap out the init binary AFTER we re-exec stock init. This is where 2SI comes to rescue!
2SI normal boot procedure is: 1st stage -> Load sepolicy -> 2nd stage -> boot continue...
2SI Magisk boot procedure is: MagiskInit 1st stage -> Stock 1st stage -> MagiskInit 2nd stage -> Stock init load sepolicy -> Stock 2nd stage -> boot continue...
As you can see, the trick is to make stock 1st stage init re-exec back into MagiskInit so we can do our setup. This is possible by manipulating some ramdisk files on initramfs based 2SI devices (old ass non SAR devices AND super modern devices like Pixel 3/4), but not possible on devices that are stuck using legacy SAR (devices that are not that modern but not too old, like Pixel 1/2. Fucking Google logic!!)
This commit introduces a new way to intercept stock init re-exec flow: ptrace init with forked tracer, monitor PTRACE_EVENT_EXEC, then swap out the init file with bind mounts right before execv returns!
Going through this flow however will lose some necessary backup files, so some bookkeeping has to be done by making the tracer hold these files in memory and act as a daemon. 2nd stage MagiskInit will ack the daemon to release these files at the correct time.
It just works™ ¯\_(ツ)_/¯
"9:15am. I went to bed early and had time to think. Forget LNNs. I was euphoric yesterday about real valued logic being connected to NNs. It is true that this would make things explainable on a small scale. I could make a hand crafted player with a few dozen parameters. But what about the other million? Would I be able to interpret that as well? The autotrained stuff in the middle layers?
Of course not. And furthermore, I do not have any better plans than providing a few extra features. What am I going to do with OR and IMPLIES?
Ultimately, I am just hiding from the fact that I've failed and looking for outs.
9:25am. If I went the LNN route it would still be yet another hacky method for me to try. It would be another few months down the drain. Gary Marcus said that a good check is: Can I try it out myself? Is it robust?
The story is pretty good, but it is a red flag that there is still no code for this out there.
If my goal was to do first order logic via neural nets then there would be no contest - I would have to give this a try. But it is not. I want to make a poker agent.
9:30am. Ultimately what I really have to do is change my target from hacks, to modeling the other player.
And I should not forget - the simple RL approach failed and there is not much I can do to turn this ship around. I started thinking I could win, but that is not true.
Forget winning. I am in a spiral into the Abyss. The most I can do is learn to control randomness and master probabilistic programming. This is the true framework. Every time I stray from the course I end up being punched in the face. You'd think I'd learn by now to ignore things that lead me away from rational reasoning in machines.
I've been convinced of the benefits of Bayesian reasoning years ago when I went through the Prob Mods book. All the other paths lead to even surer ruin.
What I should do now is try the approach of modeling the other guy. Make probabilistic programs. Try to extract efficiencies that way.
If I went down the LNN route, could I make simulators and things of that sort? No, I'd be tied up in expressing things in first order logic. Whereas probabilistic programming makes it possible to bring all my general programming techniques to bear upon the problem.
9:35am. Forget hacking addressing and nested inference for the time being. Forget categorical RL and hacks of that sort to make the net work better.
If I am going to get significant efficiency gains, it won't be by scouring the literature. I need 1,000x and more in efficiency. I won't get that by hacking the way the rewards are distributed, but by fundamentally changing the model in ways that regular deep learning won't allow me to.
9:40am. I need to do the most straightforward thing here. I've tried everything except the sane thing.
I need to suppress my skepticism and stop looking for outs. Instead just get a PPL, and do what is needed using it. Forget trying to hack things.
Don't target the rewards. Target the other guy's hand. This is the thing I haven't tried, and LNNs wouldn't allow me to do it.
I need the full power of PPLs to tackle this problem.
9:50am. I can't build things into the NN directly, which is what the LNN approach would require me to do.
Instead what I can do is build simulators.
I can retrace my steps and take control of the randomness that I had before and optimize that together. Remember how in the old thing I had to sample actions because there were too many of them to express through a single categorical distribution? Or the other sources of randomness like hand selection.
I need to implement the whole system through the lens of PPLs. That will allow me to do inference backwards through the simulator. This is my only hope for making a significant improvement.
When I was doing DL, my focus was just on shoving data into it. But with PPLs I can take a step back and consider the whole system as a single entity.
This is my only hope for winning. LNNs would not allow me to take this approach. They wouldn't be more efficient than just feeding a regular net directly.
9:55am. It is not like I am really trying to do anything fundamentally new.
Rather what I need to do is take my old programs and draw out their possibilities in ways that deep learning does not allow me to. I've gotten led astray by my doubts.
One thing I should hope for is that predicting the opponent's hand and using that to calculate EV instead of just feeding the rewards will be a better learning objective.
Yeah, I do not think I was wrong for not trying out GANs during the last run. But the approach might become viable if I restructure it fundamentally.
It is true that there are things I have not tried, but none of those things are in the standard deep learning framework.
10am. Ok, now let me finish reading the Urasekai chapter (I've been stuck on it for 40m) and I will start.
I'll resume the Pyro tutorials. This is what I need to focus on. I need to get back to studying PPLs.
A few weeks ago I was applying to jobs. I can still do that in the future. But for the next few months or longer, let me just focus on this. Forget deep learning and master probabilistic programming. That will finally put my ML skills on a firm foundation.
Probabilistic programming is the last thing I am going to try. There is nothing beyond it.
But before I've fully exhausted those possibilities, I should believe in it and strive to go forward. The thread is not broken and I am yet to pay my respects to rational reasoning.
If it is a year or two it does not matter. There will be plenty of time for boring paid work in the future if I fail.
10:10am. http://pyro.ai/examples/svi_part_iv.html
I finished this last time. Let me take a look at it again and then I'll go through the rest of examples.
...Yeah, forget about ever feeding the rewards. How about I make that a design constraint? Instead of doing the RL thing, I should aim to build a simulator and calculate the EV of various actions based on the estimate of what the opponent is holding.
I should model what the opponent is holding, how aggressive he is and how likely he is to fold and then use that to determine the course of action. I won't ever try just feeding the rewards to an RL system ever again.
This is justice. It is the right thing to do.
A few years to do this should be enough. I'll leave the speedruns for some other life.
10:15am. http://pyro.ai/examples/bayesian_regression.html
Ok, let me read this. I should focus on going through the Pyro and Gen tutorials for now. After all that is done I'll ease myself into doing programming work.
10:30am. Let me take a break, I do not feel like reading this right now. I've been burning my brain yesterday and this morning trying to decide on a course of action.
Forget Pyro. If I am going to be doing simulators, no way will Python be fast enough. I have to do it in Julia.
I need to reaffirm that goal. I am bored to death of reading things by this point. I need to get back to Julia and do some actual programming in it.
I should take a look at the rest of Z's examples and then play with the Gen and Turing libraries. I should look at how RL is done in Julia.
10:35am. Right now let me take a break. Since I skipped dinner yesterday I might as well have early breakfast here.
...Hmmm, now that I think about it, the most basic essence of proof is that, for a proposed plan or sequence of actions, one is able to imagine a course of action that would be beneficial. Being able to predict the future is a proof technique.
Current RL systems, and what I was doing, were missing that. Whenever one is wrong, one gets punished. The beatings will continue until my policy improves. I did gain the ability to write games and simulators from my long days of practice. I should put it to good use."
Create MBBS in Russia | Admission Open 2021 | Fees | Twinkle Institute AB
Russia is widely recognised for its high quality of medical education. It is especially known for the affordable fees for MBBS admission in Russian medical universities. Around thirty of the world's hundred top-ranking medical schools are Russian, which shows that Russia has some of the best medical universities for pursuing MBBS.
Studying MBBS in Russia is a good option for students who dream of becoming doctors. Among all the options, Russia stands out because of its good medical colleges, experienced faculty and affordable fee structure, which is far lower than MBBS colleges in other countries. Indian students also benefit from the fact that Russian medical universities are recognised by the MCI, so they can practise medicine in their home country after returning, which makes studying Medicine in Russia an option worth considering. As mentioned, the academic year in Russian medical universities begins in September and has two semesters, with summer and winter holidays. The complete admission process for MBBS in Russia takes about a month, so apply now and get one step closer to your goals by taking admission to MBBS in Russia.
During MBBS admission in Russia, no student wants to end up enrolled in a university that does not provide good medical education. That is why we present students only with the best universities, and the same standards are applied here. As already mentioned, the best medical colleges in Russia assure students of quality training, and unless a university meets that standard it does not go any further in our list.
A student travelling to study MBBS in Russia enjoys many advantages. We have already mentioned several of them, but the most important one is the excellent education. While they pursue the MBBS, the colleges make sure that not a single topic is overlooked in teaching, so students gain knowledge of concepts in both practice and theory. The professors do not hold themselves above the students; they treat students as friends and clear all their queries.
The professors know both English and Russian and teach accordingly. Students are not required to learn any particular language, because they are taught in English, and the professors can easily interact with them.
Also, safety standards for students studying medicine in Russia are up to the mark, and there is no reason to doubt this: the Russian Embassy takes it very seriously and will not compromise on security. If anything troubles you, strict action can be taken. MBBS in Russia is known not only for its quality education but also for the safety standards provided to students.
Student safety is one of the duties of a university. If you are studying at a university in Russia, no one will dare to harm you, and if any trouble arises, strict action will be taken.
There are some clear differences between MBBS in Russia and MBBS in India. Students should know that the admission process for MBBS in Russia is much simpler than in India; there is no need to run around. The only requirement is to qualify in the NEET-UG examination, after which admission to a Russian medical college is easy. The problem many students face in India is that they clear the NEET-UG examination but their score is not high enough, so they have to turn to private MBBS institutions that demand large donations, which weighs heavily on Indian families and makes the total fees hard to afford.
With the help of Twinkle Institute AB, you can get direct admission to study MBBS abroad in Russia. Visit our website to proceed with your Admission in MBBS 2020 today: https://www.twinkleinstitute.co.in/mbbs-md-in-russia.php
Create READ ME
Quantium is a leading data science and AI firm, founded in Australia in 2002. Quantium combines the best of human and artificial intelligence to power possibilities for individuals, organisations and society.
You are part of Quantium's retail analytics team and have been approached by your client, the Category Manager for Chips, who wants to better understand the types of customers who purchase Chips and their purchasing behaviour within the region.
Conduct analysis on client's transaction dataset and identify customer purchasing behaviours to generate insights and provide commercial recommendations.
- Examine transaction data - look for inconsistencies, missing data across the data set, outliers, correctly identified category items, and numeric data across all tables. If there are any inconsistencies, make the necessary changes in the dataset and save it. Having clean data will help when it comes to the analysis.
- Examine customer data - check for similar issues in the customer data and look for nulls. When you are happy, merge the transaction and customer data together so it's ready for the analysis, ensuring you save your files along the way.
- Data analysis and customer segments - in your analysis, make sure you cover the key metrics - look at total sales, drivers of sales, where the highest sales are coming from, etc. Explore the data, create charts and graphs, and note any interesting trends and/or insights you find.
- Deep dive into customer segments - define your recommendations from the insights, determine which segments we should be targeting and whether packet sizes are relevant, and form an overall conclusion based on your analysis.
Extend your analysis from Task 1 to help identify benchmark stores that allow you to test the impact of the trial store layouts on customer sales.
- Select control stores - explore the data and define metrics for control store selection - think about what would make them a control store. Look at the drivers and make sure you visualise these in a graph to better determine if they are suited. For this piece it may even be worth creating a function to help.
- Assessment of the trial - this one should give some interesting insights into each of the stores. Check each trial store individually in comparison with its control store to get a clear view of its overall performance. We want to know if the trial stores were successful or not.
- Collate findings - summarise your findings for each store and provide a recommendation that we can share with Julia outlining the impact on sales during the trial period.
- When working with a client, visualisations are key to helping them understand the data. Be sure to save all your visualisations so we can use them later in our report. We are presenting to our client in 3 weeks, so if you could submit your analysis by mid next week that will give us a good amount of time to discuss findings and pull together the report.
WIP: MILLION LINES OF REFACTORING BECAUSE FUCK YOU
"10:40am. Now let me do what I want which is catch up with Ayakashi Triangle.
12:50pm. Let me finish the chapter and I will start.
I need to internalize it. Algorithms, architectures, models and methods will come and go. But no amount of that will be enough if my approach itself is fundamentally flawed.
Poker is a great lesson in ML for me.
It highlights something that I've known all along - it is a lot easier to create a simulation than it is to beat it. I could easily implement a chess game, but making an agent for it would be way harder. One day I will understand what it is that I should be doing. But now in the third run, I'll make the first step.
12:55pm. Let me start. I need to get back to Julia. I should try out Gen.
https://www.gen.dev/tutorials/
I'll actually try some of the examples.
1pm. Added Gen, now it is building PyPlot.
1:10pm. I imported PyPlot and it started downloading more things again. Sigh.
While that is going on I can't play with the examples. I hope that importing Gen won't start another compilation spree.
///
Modeling with Black-box Julia code This tutorial shows how ‘black-box’ code like algorithms and simulators can be included in probabilistic models that are expressed as generative functions.
Modeling with TensorFlow code This tutorial shows how to write a generative function that invokes TensorFlow code, and how to perform basic supervised training of a generative function.
Basics of Iterative Inference in Gen This tutorial introduces the basics of inference programming in Gen using iterative inference programs, which include Markov chain Monte Carlo algorithms.
Data-Driven Proposals in Gen Data-driven proposals use information in the observed data set to choose the proposal distribution for latent variables in a generative model. This tutorial shows you how to use custom data-driven proposals to accelerate Monte Carlo inference.
///
Some of these tutorials would be rather interesting to me if they were actually written. Tsk.
1:15pm.
@trace(normal(slope * x + intercept, 0.1), (:y, i))
At any rate, Gen can take in tuples. This is good.
Addresses can be any Julia value.
That meets one requirement. Good.
If Python wasn't so slow I might be playing with Pyro right now, but given the 2,000x gap between it and F#, I think I am better off with Julia.
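(For reference, here is a minimal sketch of the kind of Gen generative function that address comes from, roughly following the line-model tutorial; the exact priors and data here are my assumptions, not copied from the tutorial. render_trace below then reads these addresses back out of the trace.)
using Gen

# A sketch of a Bayesian line-fitting model in Gen. Every random choice gets an
# address: plain symbols for the parameters, (:y, i) tuples for the observations.
@gen function line_model(xs::Vector{Float64})
    slope = @trace(normal(0, 1), :slope)
    intercept = @trace(normal(0, 2), :intercept)
    for (i, x) in enumerate(xs)
        @trace(normal(slope * x + intercept, 0.1), (:y, i))
    end
end

# Sample a trace; it can then be rendered with trace[:slope], trace[(:y, i)], etc.
xs = collect(range(-5, stop=5, length=20))
trace = Gen.simulate(line_model, (xs,))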
function render_trace(trace; show_data=true)
# Pull out xs from the trace
xs = get_args(trace)[1]
xmin = minimum(xs)
xmax = maximum(xs)
if show_data
ys = [trace[(:y, i)] for i=1:length(xs)]
# Plot the data set
scatter(xs, ys, c="black")
end
# Pull out slope and intercept from the trace
slope = trace[:slope]
intercept = trace[:intercept]
# Draw the line
plot([xmin, xmax], slope * [xmin, xmax] .+ intercept, color="black", alpha=0.5)
ax = gca()
ax[:set_xlim]((xmin, xmax))
ax[:set_ylim]((xmin, xmax))
end;
figure(figsize=(3,3))
render_trace(trace)
This is not plotting anything for me. How do I get it to show?
...The internet is down for some reason.
...My dad was fiddling with the router trying to get it to go faster.
https://github.com/JuliaPy/PyPlot.jl
Let me see how to get this to work.
sys:1: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
What do I do?
pygui(:qt)
Ok, this proves that I have the Qt backend.
Ohhhh, goddamit. This is what the scrap gets me. I need to mess with this issue instead of going through the Gen tutorial.
https://discourse.julialang.org/t/non-gui-backend-so-cannot-show-the-figure/28445/2
https://github.com/JuliaPy/PyPlot.jl#choosing-a-python-gui-toolkit
Only the Tk, wxWidgets, GTK+ (version 2 or 3), and Qt (version 4 or 5; via the PyQt5, PyQt4 or PySide), Python GUI backends are supported by PyPlot. (Obviously, you must have installed one of these toolkits for Python first.) By default, PyPlot picks one of these when it starts up (based on what you have installed), but you can force a specific toolkit to be chosen by importing the PyCall module and using its pygui function to set a Python backend before importing PyPlot:
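(A minimal sketch of that workaround, assuming a Qt toolkit such as PyQt5 is already installed in the Python environment PyCall is using:)
# Choose the Python GUI toolkit via PyCall *before* PyPlot is first imported,
# so matplotlib does not fall back to the non-GUI "agg" backend.
using PyCall
pygui(:qt)        # or :tk / :gtk3 / :wx, whichever Python toolkit is installed
using PyPlot
plot(rand(10))    # with a GUI backend this should open an interactive figure window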
$INSTALL_DIR/matplotlib/mpl-data/matplotlibrc
Where did Julia put it? Also...
using PyCall
This thing does not work.
Am I supposed to add it by hand?
C:\Users\Marko\.julia\conda\3\lib\importlib\__init__.py:127: MatplotlibDeprecationWarning:
The matplotlib.backends.backend_qt4agg backend was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
return _bootstrap._gcd_import(name[level:], package, level)
Sigh.
sys:1: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
Holy shit, I still get the same warning.
Isn't there a Julia native plotting library somewhere? This is ridiculous.
https://docs.juliaplots.org/latest/
Let me try this.
2pm. Have I been stuck on this for almost an hour already?
https://discourse.julialang.org/t/path-of-this-package/16401
I should have tried looking for the path of PyPlot and editing whatever config file is there.
No, but that does not tell me where matplotlib is.
using Plots
# define the Lorenz attractor
Base.@kwdef mutable struct Lorenz
dt::Float64 = 0.02
σ::Float64 = 10
ρ::Float64 = 28
β::Float64 = 8/3
x::Float64 = 1
y::Float64 = 1
z::Float64 = 1
end
function step!(l::Lorenz)
dx = l.σ * (l.y - l.x)
dy = l.x * (l.ρ - l.z) - l.y
dz = l.x * l.y - l.β * l.z
l.x += l.dt * dx
l.y += l.dt * dy
l.z += l.dt * dz
end
attractor = Lorenz()
# initialize a 3D plot with 1 empty series
plt = plot3d(
1,
xlim = (-30, 30),
ylim = (-30, 30),
zlim = (0, 60),
title = "Lorenz Attractor",
marker = 2,
)
# build an animated gif by pushing new points to the plot, saving every 10th frame
@gif for i=1:1500
step!(attractor)
push!(plt, attractor.x, attractor.y, attractor.z)
end every 10
This completely works. Great. It opens a plotting window on the side and shows the gif changing.
http://docs.juliaplots.org/latest/
The stuff here is pretty impressive and most importantly it works. Though now I have the issue of needing to familiarize myself with it if I want to follow the Gen examples. Well, let me do it. I mean I should be able to do this given what I am trying to do.
2:30pm. Now I am reading ML sub posts while I wait for RDatasets to install. Sigh.
2:35pm. It seems to be still installing Flux. Agh, why did I fall for this trap of actually trying out the example. I was supposed to be going through the tutorial, but 1.5 hours is down the drain already.
2:40pm. This is ridiculous. It is still precompiling Flux.
...But I can't cancel it. I'll need Flux to do ML at some point in the future.
2:45pm. How should I spend this time here? I am just reading the Deepmind thread.
Let me take a look at the OmegaModels.
2:50pm. It is still precompiling Flux. This is just way too much Julia.
3pm. It is still precompiling it.
Let me skim all the examples and I'll think what comes next after that.
Ok, it finished Flux, but why is it precompiling Omega again?
3:05pm. This is a waste of time. Since I am spending my entire day just installing Julia packages, let me read the probability theory book by Jaynes while that is going on. Looking at the source for Zenna's models is not teaching me much.
I've gone over the first 2 chapters. Let me read the third.
3:25pm. It is still compiling Omega. Why is it precompiling that package? No way that the RDataset can have anything to do with it. Changing one of its dependencies must have triggered it.
3:35pm.
104 dependencies successfully precompiled in 3757 seconds (174 already precompiled)
It took 62.6 minutes to precompile it all. I do not feel like it. Let me continue reading the book for now. Let me just add the StatPlots package while I am at it as well.
3:45pm. That went through in only 5m.
Let me get back to the book.
At the end of this chapter (Exercise 3.6), the reader will have an opportunity to demonstrate this directly, by calculating a backward inference that takes into account a forward causal influence.
I should do the exercises in this book with the aid of a probabilistic language. I'll pick Gen. That will tell me where I stand.
4:05pm. 93/758.
This stuff where he explains how information about the future affects the past through the lens of probability theory is interesting. There is an implication of how fate would work.
4:30pm. 96/758. This chapter is not as bad as I thought it would be. I mean, I am bored, but this is fairly informative. Looking at rule applications from different angles.
4:50pm. 101/758. There are some exercises here. Since I am not a pen and paper guy, I should try to solve this with the help from Gen. I should write a program that does it for me.
In probability theory there is a very clever trick for handling a problem that becomes too difficult. We just solve it anyway by:
- making it still harder
- redefining what we mean by ‘solving’ it, so that it becomes something we can do
- inventing a dignified and technical-sounding word to describe this procedure, which has the psychological effect of concealing the real nature of what we have done, and making it appear respectable
In the case of sampling with replacement, we apply this strategy as follows:
- Suppose that, after tossing the ball in, we shake up the urn. However complicated the problem was initially, it now becomes many orders of magnitude more complicated, because the solution now depends on every detail of the precise way we shake it, in addition to all the factors mentioned above.
- We now assert that the shaking has somehow made all these details irrelevant, so that the problem reverts back to the simple one where the Bernoulli urn rule applies
- We invent the dignified-sounding word randomization to describe what we have done. This term is, evidently, a euphemism, whose real meaning is: deliberately throwing away relevant information when it becomes too complicated for us to handle.
Lol.
It is probably safe to say that no limit theorem is directly applicable in the real world, simply because no mathematical model captures every circumstance that is relevant in the real world. Anyone who believes that he is proving things about the real world, is a victim of the mind projection fallacy
6:10pm. Let me have lunch here. I am having some ideas. The way I am thinking about training the policy now will make it into a prediction problem which will collapse variance and actually make it tractable on poker.
It is a very different way of doing things compared to before. Rewards won't even enter the picture in training it, rather rewards will only be a factor in EV calculations. But those will be derived from simulation.
I won't require the net to learn to hand read anymore, instead I'll reduce the problem to a couple of simpler features. In fact now that I think about it, I do not have to even feed the potential draw data, instead I should just calculate the improvement potential for the hand along with its probability. I should simulate n games and take the top 5,10,20,30,50% as the potential hand strength on the next street. That might be the right way to think about this.
It would be better than having the net think about hand reading at all.
Hmmm, actually in that case, in order to amortize things, I could train a supervised hand reader to predict the strength. Estimating the hand strength on the current street is O(n), but estimating it on the future street would be O(n^2).
But still, this is just the odds of an all-in; it has no bearing on the opponent's policy. So it can be done in a purely supervised fashion. NNs are very good at that.
There are a lot of such places where NNs could help. I just have to resist the urge to try to solve the entire problem end to end.
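(To make that concrete, a toy sketch of the quantile-style strength features; rollout_equity below is a made-up stand-in for a real showdown simulator, not anything from this project.)
# Toy sketch of the "top 5/10/20/30/50%" idea: run n rollouts of the next street
# and summarize the resulting equities with upper quantiles.
using Statistics, Random

rollout_equity(rng) = clamp(0.55 + 0.2 * randn(rng), 0.0, 1.0)   # placeholder simulator

function strength_features(rng; n = 1_000, qs = (0.05, 0.10, 0.20, 0.30, 0.50))
    vals = [rollout_equity(rng) for _ in 1:n]
    top = [quantile(vals, 1 - q) for q in qs]    # equity at the top-q% of runouts
    return mean(vals), top
end

strength_features(MersenneTwister(0))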
6:45pm. Done with lunch.
What I described above is the way to break into the 3/5 range. I have to take seriously what Gary Marcus says about NNs not being able to reason, rather than hoping it all works out. There is a way to have NNs guide the simulator to informed guesses, but I should be wary of trying to do the full task with them.
This is what I've been missing. This is what has been hindering me in the last 7 months.
I benefited greatly from memoization in the making of Spiral. But I can do it approximately with NNs as well.
And this fits the agenda of probabilistic programming like a glove.
120/768. Let me stop the book at chapter 4 here. I'll continue it tomorrow along with the Gen tutorial. I've installed the relevant Julia packages so hopefully it won't make me wait for hours again.
6:55pm. No mail, let me close here for the day."
Trying to add stupid ass video shit because fuck you too kade dev
Update and rename README.md to TLCapps
True love club . God is both the center all life and center of love, life is the center of love, the give loves the word and life the world is short. we can receive them for eternity. act with confidence, just like the God's children lived when the as
Preparing for Update 2.4.0
- Updated Roots to 1.12.2-3.1.2. This fixes the server crash with the Runic Crafter, a crash when using a Comparator on a block, and other bugs: https://www.curseforge.com/minecraft/mc-mods/roots/files/3461007.
- Added the Knowledge Sharer mod, making it possible to share Thaumcraft knowledge between team members. To use, both players need to hold a Thaumonomicon, and the sharer has to right click the receiver.
- Aerogel is now craftable with Tier 1 and 2 Mystical Agriculture Essences.
- Conduit Facades now craft 4 at a time.
- IE Wires are now much less likely to burn up. (Increased their transfer rate, but this is still limited by the connector transfer rate.)
- The Sight Reagent now takes a Splash Night Vision Potion instead of an Insight III book, because enchantment IDs could get randomly shuffled on servers.
- IE Uranium and Platinum deposits are now actually correctly removed.
- The Immersive, Actually Additions, Extra Utilities and Lightning Crushers can now crush regular and charged Certus to Dust, and the AE2 Grindstone and the Thermal Pulverizer can now crush charged Certus as well.
- The SAG Mill can now process Gold Ores and Oil Sand Ores.
- The Creative Vending Upgrade now requires 3 Creative Capacitor Banks instead of 3 Creative Buffers, because after I've moved the Creative Buffer further, the Vending Upgrade was made uncraftable.
- Fixed a Command Block not turning into a Spawner in the Venus Dungeon.
- There is currently a bug with EnderTweaker where custom Alloy Smelter recipes that used inputs with multiple items in the same stack could be created with only 1 item in the stack. Because of this, the Resonating Orb and the Festive Ball can no longer be crafted in the Alloy Smelter, the Dark Soularium recipe was slightly altered, and all other affected recipes were moved from scripts to EnderIO's user_recipes.xml: Blank Dark Steel Upgrade, Snowflake, Electrotine/Red/Glowing/Cheesy Silicon Compounds, Modularium Ingot, Raw Meteoric Iron, Blutonium Ingot, Black Iron Ingot, Potentia Sphere, Materialized Vengeance Spirit, Darkened Apple, Organic Black Dye, all custom Alchemistry Compounds, Tough Galactic Plating, Crystal Bundle, all Blood Magic Runes, Demon Heart duplication recipe, Crystal Matrix Ingot, Chunk of Coralium.
- Increased the Capacity of all Mekanism Energy Cubes. (800k / 3.2M / 12.8M / 51.2M -> 4M / 12M / 36M / 108M)
- The Ring of Growth no longer requires a Roots Spell dust, as it became uncraftable in Roots 3.1.
- Added more Fossil, Temple and Gneiss Ores to Erebus.
- Fixed Pulverized Obsidian giving Molten Brass in the Magma Crucible - now it gives Lava.
- Compasses and Clocks can now be smelted in the Induction Smelter.
- The Wildwood Block now requires Manaweave Cloth instead of Spellbinding Cloth since Extended Crafting can't use IIngredientTransformers. Related: BlakeBr0/ExtendedCrafting#126.
- The Atomic Disassembler now drops all blocks it mines.
- Added a custom recipe for the Handy Bag (Large), and moved its quest from Chapter 13 to 19.
- Rearranged the quests in Chapter 13, and the "Memory Card (items) 12 B" quest no longer requires the Ender Storage quests.
- The Necronomicon quests now accept a Necronomicon with any PE.
- Added a quest in Chapter 18, explaining the 4 remaining Blood Magic Runes.
- Fixed some typos: TEPid Brine quest: "xzy" -> "xyz" and "cube" -> "cuboid", JetPlate quests: "change" -> "charge", Magical Tablet: "fulfil" -> "fulfill".
- Fixed misinformation in the Self Sacrifice Rune quest "10% additively" -> "5% additively".
- Moved around the Capacitor quests in Chapter 4 so they're available earlier.
- Roots quests now correctly state that any kind of Elemental Soil works.
- A Flute with any NBT is now accepted in Chapter 2.
- The "What is BQ" quest now correctly describes the quest layout.
- The Excavator quest now correctly states that the consumption rate is 1024 RF/tick, not 4096.
- The Worn Stone Brick Path quest now rewards a Large Bloodstone Tile instead of 2 Demon Blood Shards (could skip progression). (Thank you to WaitingIdly for implementing most of these quest changes!)
- Added the Pixel Gaming server as a Featured Server.
- The Eye of the Watcher tooltip now correctly references the Call of the Watcher.
- Added a warning tooltip for the Mysterious Clock, Call of the Watcher and Horde Horn to not use them from your off-hand, as it could delete a different item in your inventory.
- Added a tooltip for the Infinity Booster Card: "Can only be crafted, is not consumed by operation."
- Corrected a loading screen tip: "Slimestring" -> "Slimesling".
Replace 'next lint' with a proper eslint setup
Next lint is pretty unopinionated. By default it does the following:
- Extend react/recommended
- Extend react-hooks/recommended
- Add 18 Next.js specific rules
- Manually configure 9 other rules
This is too unopinionated for my taste. I WANT my linter to correct me when I do stupid shit.
By contrast, Airbnb has an opinion on more than 450 rules...
OTHER THINGS WORTH MENTIONING:
- tsconfig.eslint.json is a hack mentioned by eslint-typescript and airbnb-typescript. It avoids a problem where eslint complains when trying to lint a file not listed in 'include' in tsconfig.