3,135,951 events, 1,534,165 push events, 2,481,058 commit messages, 196,278,482 characters
VSCode
- snapd, sigh, whatever... it's usable... kinda hate this trend of snap/flatpak everything, then again it's easy to use... but meh, still not emotionally attached to it... but for the first iteration we'll leverage snap for the install of vscode
Resolves: main
Redoes how geese handle eating shit, it was fucking stupid and caused harddels, and while this method is technically slower in the best case, it's a fucking goose
Implementation of store command (#9)
- Updated client-side files
Should be mostly done as far as this branch is concerned--wanted to enable uploading a file to the server, but that is all that's left to be implemented for this task
- Updated server-side files
Please note, these files will not work with Firebase. Also, the requests to the data store don't actually work (oops). I've figured that the issue is the permissions/authentication, which needed to be changed anyway so we can keep the storage bucket private. At the time of this commit, honestly just too tired to keep working on it.
- Fixed authentication(?)
Yeah, need to upload the env file but that should give the webapp authentication
- Create .env
- Changed implementation of dicomweb commands (untested and unimplemented)
DICOM web commands have begun being implemented. I made a bit of progress on two different implementations, through nodejs and python. The python approach will be used if we ultimately feel a need to switch to flask; however, it presents issues I'm not familiar with, while I am familiar with the nodejs issues.
- Updating handleDicom to reflect dicomweb commands
- Updating web files
Implemented some things to allow for testing/implementation of the store command
- Implemented Dicomweb store
FUCK YOU BOAT PEOPLE
The modern Fleet 56 Ships The Shittillia =129 shit ships
Remove support for redundant %_filter_GLIBC_PRIVATE mechanism
This was kinda ugly-but-necessary when added back in 2003 (commit 752cac72e220dcad4e6fce39508e714e59e3e0a1) but has been entirely redundant since the "new" dependency generator in rpm >= 4.9.x, which supports arbitrary filtering. The handful of packages using it can just as well achieve the same (and more) without special hacks in rpm with:
%global __requires_exclude GLIBC_PRIVATE
%global __provides_exclude GLIBC_PRIVATE
Remove ObjCThemis.xcodeproj (#704)
- Remove ObjCThemis.xcodeproj
The idea behind building "objcthemis.framework" has been to unify import syntax between Carthage and CocoaPods. Unfortunately, it turned out to be a mistake. "objcthemis.framework" does not work without "themis.framework" being present alongside it because of how module resolution works. Despite "objcthemis.framework" providing the same "themis" module as "themis.framework", the compiler will look for a framework named "themis.framework" when resolving "import themis".
Moreover, the original issue that "objcthemis.framework" has been called to rectify can be resolved more elegantly by importing the module:
@import themis;
which works well with "themis.framework" in both Carthage and CocoaPods.
Since "objcthemis.framework" does not bring any value, remove it. Move all new things added to ObjCThemis.xcodeproj into Themis.xcodeproj (such as testing Swift 4 vs 5). Remove the import warning. Now Carthage will build only one framework: "themis.framework" from Themis.xcodeproj.
I am sorry for the trouble and confusion of this fizzled migration.
- Change "product name" to "themis"
Make sure that Xcode targets produce "themis.framework", not "objcthemis.framework".
- Recreate Xcode schemes
It seems that some schemes stick around after renaming. Recreate them to make sure that we're building "themis.framework" and there are no traces of the old Xcode project.
- Bring back proxy umbrella header "themis.h"
Since the framework is named "themis.framework", its umbrella header is expected to be called "themis.h". The actual umbrella header for ObjCThemis is "objcthemis.h" which we simply include.
- Use alternative imports in unit tests
One of the reasons for "objcthemis.framework" existence was to run ObjCThemis unit tests from Xcode. Initially, "themis.framework" prevented that due to import issues, and "objcthemis.framework" allowed #import <objcthemis/objcthemis.h> to work. Now that the latter is gone, the unit tests are broken again.
However! It seems that using modular imports works for Xcode and Carthage (which uses Xcode project). The bad news here is that it does not work for CocoaPods, which still works only with the old form because CocoaPods does some special wicked magic with headers, putting them into the "objcthemis" directory.
I do not have much time and willingness to deal with this stupidity anymore right now, so here's a compromise: Carthage uses its form, CocoaPods use their form, and you get this TODO to maybe get rid of this wart some time later.
(cherry picked from commit 5522acee08f7037e5d7e9caf3616e354eaaeff8e)
Add missing OpenSSL includes (#684)
- Add missing OpenSSL includes
Some files use the BIGNUM API of OpenSSL but do not include the relevant headers. Due to miraculous coincidence, this seems to somehow work for the OpenSSL versions we use, but only because either existing headers include this "bn.h" transitively, or because the compiler generates code that kinda works without a function prototype being available.
However, curiously enough, this breaks when building Themis for macOS with recent OpenSSL 1.1.1g but not with OpenSSL 1.0.2, or OpenSSL 1.1.1g on Linux. The issue manifests itself as missing "_BN_num_bytes" symbol. Indeed, there is no such symbol because this function is implemented as a macro via BN_num_bits(). However, because of the missing header, the compiler -- being C compiler -- decides that this must be a function "int BN_num_bytes()" and compiles it like a function call.
Add the missing includes to define the necessary macros and prototype, resolving the issue with OpenSSL 1.1.1g. It must have stopped including <openssl/bn.h> transitively, revealing this issue.
This is why you should always include and import stuff you use directly, not rely on transitive imports.
P.S. A mystery for dessert: BoringSSL backend includes <openssl/bn.h>.
- Treat warnings as errors in Xcode
In order to prevent more silly issues in the future, tell Xcode to tell the compiler to treat all warnings as errors. That way the build should fail earlier, and the developers will be less likely to ignore warnings.
- Fix implicit cast warnings
Now that we treat warnings as errors, let's fix them.
themis_auth_sym_kdf_context() accepts message length as "uint32_t" while its callers use "size_t" to avoid early casts and temporary values. However, since the message length has been checked earlier and will fit into "uint32_t", we can safely perform explicit casts here.
- Suppress documentation warnings (temporarily)
Some OpenSSL headers packaged with Marcin's OpenSSL that we use have borked documentation comments. This has been pointed out several times, but Marcin concluded this needs to be fixed upstream.
Meanwhile, having those broken headers breaks the build if the warnings are treated as errors. Since we can't upgrade Marcin's OpenSSL due to other reasons (bitcode support), we have no hope to resolve this issue.
For the time being, suppress the warnings about documentation comments.
- Fix more implicit cast warnings
There are more warnings relevant only on 32-bit platforms. Some iOS targets are 32-bit, so we should avoid warnings there as well.
The themis_scell_auth_token_key_size() and themis_scell_auth_token_passphrase_size() functions compute the size of the authentication token from the header. They return uint64_t values to avoid overflows when working with corrupted input data on the decryption code path. However, they are also used on the encryption path, where corruption is not possible. Normally, authentication tokens are small; they most definitely fit into uint32_t, and this is the type used in the Secure Cell data format internally.
It is not safe to assign an arbitrary uint64_t to size_t on 32-bit platforms. However, in this case we are sure that the auth token length fits into uint32_t, which can be safely assigned to size_t.
Note that we cast into uint32_t, not size_t. This is to still cause a warning on platforms with 16-bit size_t (not likely, but cleaner).
(cherry picked from commit 1ca96de89b66391114f615658fbc4819aa248b9b)
Really starting to see the evils of weak typing. psutil passes a Process object to cpu_percent() if you call it from a Process object. Y'know, how that kind of thing works under-the-hood in OOP. But it's (probably) static, and doesn't have a 'self' var. And it gives ZERO shits. no errors, no warnings, nothing.
What the fuck. I don't even.
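The behaviour being complained about can be reproduced without psutil at all: Python binds whatever object you call a method on, with no receiver-type checking. A minimal sketch with a toy class (not psutil's actual code):

```python
class Process:
    def cpu_percent(self, interval=None):
        # `self` is never type-checked: any object (even a float)
        # can be bound here without errors or warnings.
        return interval

p = Process()
print(p.cpu_percent(0.5))        # normal call: prints 0.5
print(Process.cpu_percent(0.5))  # 0.5 silently bound as `self`; prints None
```

No error, no warning: the interpreter happily treats 0.5 as the receiver and leaves `interval` at its default.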
Try to fix early boot
NixOS kind of mutilates the boot order; breaking all kinds of important assumptions in systemd (e.g. that static /dev nodes are created before udev is started up) in order to fix nixops send-keys. This is fucked.
It's beyond me why we would need all these kind of hacks. nixops send-keys should be implemented in stage-1 ; not mutilate stage-2
Fix kmod-static-nodes.service by changing module path
In NixOS, kmod looks in /lib/modules (stage-1 traditionally), and in stage-2 looks in /run/booted-system/kernel-modules/lib/modules.
However, we patched kmod-static-nodes.service to only trigger if /run/booted-system exists, which traditionally didn't exist in the NixOS initrd.
Two ways to fix it:
- add that path
- make the paths looked up in kmod-static-nodes.service align with the kmod package
See: https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kmod/default.nix#L7 https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/systemd/0016-kmod-static-nodes.service-Update-ConditionFileNotEmp.patch#L21
Merge remote-tracking branch 'origin/master' into rf-eval-results
- origin/master: (45 commits)
Updating mailmap
TST: annexrepo: Adjust test for TestRepo change
BF: annexjson2resulti() could not handle file=null results
TST: no_annex: Avoid racy failure on adjusted branches
BF: Pre-population of testrepos already needs HTTP test server setup
RF: Move AutomagicIO functionality to -deprecated
very basic test for completion helper
RF: hello java my old friend, I've come to love your kludge again
RF: Benchmark run needs -deprecated
TST: Make test robust to moving ls command
DOC: Remove ls from docs
BF: Consolidate dry(-)run arguments and issue deprecation warning
TST: Adjust tests of github helpers
TST: Adjust for new reporting behavior
BF: Let docs match actual behavior
RF: Deprecate --dryrun in favor of --dry-run
RF: Github helpers now report records to be able to continue in error
ENH: remove a shadow from an attempt to have secrets in travis CI
MNT: Post-release dance
RF: Move otherwise unused safe_print() helper for ls() to -deprecated
...
Conflicts:
datalad/distribution/create_sibling_github.py - took this
datalad/interface/base.py - took this
datalad/interface/ls.py - removed as well, not sure if ever would be tuned up in -deprecated
The idea with infer works well, but I have to find a solution for Nats, because they are not Exprs, and maybe it is a bad idea to try to solve it in infer as well. Maybe I have to paste it directly while parsing, but I would really like to find a solution in infer. I only need to convert a Nat into an Expr so that infer is able to work with it. That sounds easy, but it is not, because it is not intended to be done. Probably I really have to have one shitty exception, and that is Nat: a case where it is not OK to first make the declaration and implement it later. That would suck, because it would be a little unintuitive; everything else works in a way where we only need the declaration beforehand, and if I am correct you don't even have to do that in my solution. So you can write 'f=\x-> fkt x' and declare and implement fkt at a later point in the file. I have to make sure that is really the case, but with my understanding it should work like that.
[CHG] core, web: deprecate t-raw
Add a big fat warning when the qweb compiler finds a t-raw. t-esc should now be used everywhere; the use-case for t-raw should be handled by converting the corresponding values to Markup objects. Even though it's convenient, this constructor should never be made available in the qweb rendering context (maybe that should be checked for explicitly?).
Replace werkzeug.escape by markupsafe.escape in odoo.tools.html_escape; this means the output of html_escape is markup-safe.
Updated qweb to work correctly with escaping and Markup; amongst other things, QWeb bodies should be markup-safe internally (so that a t-set value can be fed into a t-esc). See at the bottom for the attributes handling, as it's a bit complicated.
to_text needed updating: markupsafe.Markup is a subclass of str, but str is not a passthrough for strings. So Markup instances going through would be converted to normal str, losing their safety flag. Since qweb internally uses to_text on pretty much everything (in order to handle None / False), this would then cause almost every Markup to get mistakenly double-escaped.
Also mark a bunch of APIs as markup-safe by default:
- html_sanitize output.
- HTML fields content: sanitization is applied on intake (so stripped by the trip through the database), and if the field is unsanitised the injection is very much intentional, probably. Note: this includes automatically decoding bytes, as a number of default values & computes yield bytes, which Markup will happily accept... by repr-ing them, which is useless. This is hard to notice without -b.
- Script-safe JSON, it's rather the point (though it uses a non-standard escaping scheme).
- nl2br, kinda: it should work correctly whether or not the input is markup-safe; this means we should not need to escape values fed to nl2br, but it doesn't hurt either.
Update some qweb field serialisations to mark their output as markup-safe when necessary (e.g. monetary, barcode, contact). Otherwise either using proper escaping internally or doing nothing should do the trick.
Also update qweb to return markup-safe bytes: we want qweb to return markup-safe contents, as a common use-case is to render something with one template and inject its content in another one (with Python code in between, as t-call works a bit differently and does not go through the external rendering interface).
However qweb returns bytes while Markup extends str. After a quick experiment with changing qweb rendering to return str (rather unmitigated failure I fear), it looks like the safest tack is to add a somewhat similar bytes-based type, which decodes to a Markup but keeps bytes semantics.
For debugging and convenience reasons, MarkupSafeBytes does not stringify and raises an error instead (__repr__ works fine). This is to avoid implicit stringifications which do the wrong thing (namely create a string "b'foo'").
Also add some configuration around BytesWarning (which still has to be enabled at the interpreter level via -b, there's no way to enable it programmatically smh), and monkeypatch showwarning to show warning tracebacks, as it's common for warnings to be triggered in the bowels of the application and hard to relate to business logic without the complete traceback.
t-esc is a bit confusing for the new behaviour of "maybe escape maybe not", so add a t-out alias with the same behaviour.
Unlike t-raw, t-esc is only soft-deprecated for now: there are thousands of instances, so editing all the templates is not great. Eventually we'll add a ci/style check to prevent addition of new ones, and eventually we might do a bulk-replace and hard-deprecate.
There are a few issues with respect to attributes. The first issue is that markup-safe content is not necessarily attribute-safe, e.g. markup-safe content can contain unescaped < or double quotes, while attributes can not. So we must forcefully escape the input, even if it's supposedly markup-safe already.
This causes a problem for script-safe JSON: it's markup-safe but really does its own thing. So instead of escaping it up-front and wrapping it in Markup, make script-safe JSON its own type which applies JSON-escaping during the __html__ call. This way, if a script-safe JSON object goes through markupsafe.escape we'll apply script-safe escaping; otherwise it'll be treated as a regular string and eventually escaped the normal way.
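A type that defers its escaping to the __html__ call could look roughly like this (a hypothetical sketch, not Odoo's actual class; the escaping scheme shown is the usual JSON \u-escape trick):

```python
import json

class ScriptSafeJson:
    """Hypothetical sketch: JSON whose script-safe escaping is applied
    during __html__, not up-front."""
    def __init__(self, value):
        self.value = value

    def __html__(self):
        # Escape characters that could terminate a <script> block, using JSON
        # string escapes so the payload stays valid JSON for the browser.
        return (json.dumps(self.value)
                .replace("&", "\\u0026")
                .replace("<", "\\u003c")
                .replace(">", "\\u003e"))

payload = ScriptSafeJson({"selector": "</script>"})
print(payload.__html__())
```

An escape() implementation that honours __html__ would then call this method and get script-safe output, while code that never escapes it just sees an opaque object rather than a prematurely wrapped Markup.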
A second issue was the processing of format-valued attributes (t-attf): literal segments should always be markup-safe, while non-literal segments may or may not be. This turns out to be an issue if the non-literal segment is markup-safe: in that case, when the literal and non-literal segments get concatenated, the literal segments will get escaped, then attribute serialization will escape them again, leading to doubly-escaped content in attributes.
The most visible instance of this was the snippet_options template, specifically:
<t t-set="so_content_addition_selector" t-translation="off">blockquote, ...</t>
<div id="so_content_addition"
t-att-data-selector="so_content_addition_selector"
t-attf-data-drop-near="p, h1, h2, h3, .row > div > img, #{so_content_addition_selector}"
data-drop-in=".content, nav"/>
Here so_content_addition_selector is a qweb body, therefore markup-safe. When concatenated with the literal part of t-attf-data-drop-near it would cause the HTML-escaping of that literal part, yielding a new Markup object. Normal attribute processing would then strip the markup flag (using str()) and escape it again, leading to doubly-escaped literals.
The original hack-around was to unescape() Markup content before stringifying it and escaping it again, in the attribute serialization method (_append_attributes).
That's pretty disgusting; after some more consideration & testing, it looks like a much better and safer fix is to ensure the expression (non-literal) segments of format strings always result in str, never Markup, which is easy enough: just call str() on the output of strexpr. We could also have concatenated all the bits using ''.join instead of repeated concatenation (+).
Also add a check on the type of the format string for safety, I think it should always be a proper str and the bytes thing is only when running in py2 (where lxml uses bytestrings as a space optimization for ascii-only values) but it should not hurt too much to perform a single typecheck assertion on the value... instead of performing one per literal segment.
Note: we may need to implement unescape anyway, because it's still possible to get double-escaping with the current scheme: given an explicitly escape-ed foo and t-att-foo="foo", foo will be re-escaped.
fixup! [CHG] core, web: deprecate t-raw
Final V10
Added Morphling
Fixed Double Warper
NERFED INCHLING EVEN MORE GOD FUCKGING DAMN FUCK THOSE BITCHES
Remove the external notion of an MTThread.
Before this commit we required users to create threads through the MT struct, so that we could pass those threads an MTThread structure. This works well if you know from the very beginning that you're creating a yk interpreter, but will not work if you e.g. embed a yk regex interpreter inside a wider program that's otherwise ignorant about yk (and may create threads via another API).
The "obvious" solution is to make MTThread a thread_local, but if we do that, we also have to make MT a global. And that's more-or-less what this commit does, with the minor tweak that it hides MTThread from the outside world. Users can access the MT struct via MT::global() at any point in their program: MTThread is a thread local that is used by MT internally but doesn't need to be exposed to the user.
This makes the API simpler to use, perhaps slightly icky in some ways (we've all been brought up to believe that globals are bad!), but probably powerful enough to carry us for a decent distance with, hopefully, only minor tweaks.
Internally, this clearly needs work: MTs are initialised eagerly but MTThreads lazily, and the horrible with hack needed to get an MTThread reference to hot_location is best not mentioned in polite company. Those can, I believe, be finessed in future PRs without changing the external API. Put another way: this commit is as close to "minimal change" as I think I can get.
added project and classpath
Update Subspecies.java
fixes name issue where female feral alligators whose names are unknown would parse as "an alligators"
Amarok/project+class path addition to minor edits (#2)
- added project and classpath
- Addition of new setFeral methods
adds new setFeral methods that use the gamecharacter's subspecies
- Addition of new setFeral methods
adds new setFeral methods that use the gamecharacter's subspecies
- Custom Parser functions
special parser functions, will likely be culled at some point
- Revert "Custom Parser functions"
This reverts commit bd4a80630ed49dfda0d2c0c0c931a68111596165.
- Revert "Addition of new setFeral methods"
This reverts commit edbd5057c847b42b786e396e5b10ca4b52a1822e.
Update RoomPlayer.java
added a LegConfiguration check for tailed npcs, so that they weren't doing anything with their feet
Add support for Lamia tails to RoomPlayer.java strings
previously, the code made no attempt to check if npcs had a tail, resulting in weird descriptions of lamias tapping or shuffling their feet
Modifications to parsing error displays to help debugging
adding descriptive features to "Error in script parsing" message to show which command has failed. only available when Debug mode is true.
Adds feral descriptors and BodyPartInterface capability to Spinneret
Spinnerets now have special feral text for taur and full feral variations. Spinnerets now access BodyPartInterface alongside OrificeInterface.
Adds Commonwealth English alternatives to 'Mom' and 'Mommy'
Adds new setting to toggle between North American and Commonwealth variants of 'mother'. Adds this functionality to npc.mom, npc.mommy, #npc.getPetName and other related methods. Adds #game.isCommonwealthMum() public boolean which returns true when using 'mum' over 'mom'.
Adds new utility class in Parsing Engine
New utility class creates special methods accessible from the parser/xml files. Adds getTextFromXMLFile, which returns parseFromXMLFile.
Edits commandsList ParserCommand with new descriptions for some TODOs
Also adds new masculine insults
Modifies npc.mom and npc.mommy to be also called as npc.mum and npc.mummy
Edits commandsList ParserCommand with new descriptions for some TODOs
Also adds new masculine insults Now you can call blokes wankers, like the Brit you are
Modifies npc.mom and npc.mommy
Lets them now be also called as npc.mum and npc.mummy
Adds Commonwealth English alternatives and more paired names to all set name fields
You can now force your kids, ray, and your elemental to be your cubby bitches
Adds new content to 'Set name' feature
'Mother' and 'Father' are new terms that characters can use with you. All 'Set name' screens now include a list of all valid gendered pairs. Fixed npc.mom and npc.mommy commands using the wrong mom/mum term.
Modifies sibling detection and interaction text
Adds concept of 'reversed parents', where npc1's dad is npc2's mum and vice versa. Kids will now be full-blooded relatives if they have the same common parents, or have the same reversed parents in common, or if the other sibling is a selfcest child. Selfcest kids should still see all non-selfcest kids as half siblings. Kids will be half relatives if they have either only one common parent or one reversed parent in common. Fixes occupant dialogue so siblings parse properly instead of as 'no relation'; done by changing all 'npc' to 'npc1'.
Fixes occupant dialogue mistakes
Fixes petting description where there were parsing issues regarding count of tails
added project and classpath
Addition of new setFeral methods
adds new setFeral methods that use the gamecharacter's subspecies
Custom Parser functions
special parser functions, will likely be culled at some point
Tidying up Homebake before merger
Removes unnecessary code from Setferal method. Cuts special parser functions as they are unneeded.
Modifies content in 'Set name' feature
Makes code read easier. Re-adds the 'full list of paired names' line. Corrects parser commands for elementals.
Revert "Update Subspecies.java"
This reverts commit fe21d2073f1f82b9c9895eb94ec53e816cf54541.
Experience level gain formula adjustment.
Several variants have made adjustments here to varying degrees, because it's widely agreed upon that the experience scale in vanilla NetHack is too restrictive, meaning that you'll practically never reach an experience level above 16-17 or so just by killing monsters, you'll require some other source (potions of gain level, eating wraith corpses). I had a look at how some other variants address this, and most of them in my opinion went too far in the other direction, making it too easy to level up. So we'll try our own method. It's still an exponential growth curve like vanilla, but the curve ends sooner (level 17 instead of 22), and the experience needed to progress is much less (320,000 per level instead of 10,000,000 per level). Yes, it still requires work to level up if you grind, as it should. But this is a much more realistic take I think. And given that EvilHack's monster generation ramps up as you progress, I don't think it will be too difficult to level up via fighting if that's how the player wants to go about it. We'll see how this change goes over.
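The curve shape described (exponential growth that ends at level 17, then a flat 320,000 per level) can be sketched as follows; the base values here are purely illustrative, not EvilHack's actual experience table:

```python
def xp_for_level(level, cap=17, per_level=320_000, base=10):
    """Illustrative sketch: exponential growth up to `cap`, then a flat
    per-level cost. `base` is a made-up seed value, not the real table."""
    if level <= cap:
        return base * (2 ** level)
    # past the cap, each level costs a fixed 320,000 instead of
    # vanilla's 10,000,000 per level
    return base * (2 ** cap) + per_level * (level - cap)
```

With this shape, grinding still gets exponentially harder through the mid-game, but the post-cap levels stay reachable by combat alone.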
Redesigned inefficient loops
There were some inefficient loops that would loop through the entire array of question sets looking for specific sets. This is mainly because I kinda forgot that I could just grab the specific numbered sets. I think I just got used to the idea of looping. Anyway, I redesigned them to fetch the specific question sets rather than look through all of them and filter out the one I want with if statements. This is important, because the way I was doing it before was ridiculously inefficient, with the app having to loop through the whole thing multiple times, unnecessarily.
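The refactor described, fetching a specific question set directly instead of scanning the whole array with if statements, looks roughly like this (names and data are hypothetical):

```python
question_sets = [
    {"id": 0, "title": "Warm-up"},
    {"id": 1, "title": "Algebra"},
    {"id": 2, "title": "Geometry"},
]

# Before: O(n) scan with a filter for every lookup.
def find_set_slow(wanted_id):
    for qs in question_sets:
        if qs["id"] == wanted_id:
            return qs
    return None

# After: build an index once, then do O(1) lookups by id.
sets_by_id = {qs["id"]: qs for qs in question_sets}
print(sets_by_id[2]["title"])  # Geometry
```

When the sets are already numbered, even the dict is optional: a direct list index does the same job without any looping.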
Revert "raphael-sepolicy: Label audio_hal.in_period_size"
- fuck you xiaomi
This reverts commit 3d9d7a4.
Change-Id: Ie580d3bda8856c13a5d8124e57b4bfb67c4ab56c
Merge pull request #1 from mhughes72/dev
fuck you github
was not able to solve myself
For sure, the love mobiles will roll again on this summer's street parade. Each year, the organisers decide on a fixed order for the decorated trucks. Experience taught them to keep free a side street to be able to bring the trucks into order.
The side street is so narrow that no two cars can pass each other. Thus, the love mobile that enters the side street last must necessarily leave the side street first. Because the trucks and the ravers move up closely, a truck cannot drive back and re-enter the side street or the approach street.
You are given the order in which the love mobiles arrive. Write a program that decides if the love mobiles can be brought into the order that the organisers want them to be.
Input There are several test cases. The first line of each test case contains a single number n, the number of love mobiles. The second line contains the numbers 1 to n in an arbitrary order. All the numbers are separated by single spaces. These numbers indicate the order in which the trucks arrive in the approach street. No more than 1000 love mobiles participate in the street parade. Input ends with number 0.
Output For each test case your program has to output a line containing a single word "yes" if the love mobiles can be re-ordered with the help of the side street, and a single word "no" in the opposite case.
Example Sample input: 5 5 1 2 4 3 0
Sample output: yes
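The side street behaves exactly like a stack, so the decision procedure is a straightforward simulation: push each arriving truck, and pop whenever the top of the stack is the next truck the organisers want. A sketch of a solution (my own, not part of the original task):

```python
def can_reorder(arrivals):
    """Return True if trucks arriving in `arrivals` can leave in
    order 1..n using the side street as a stack."""
    stack = []
    next_needed = 1
    for truck in arrivals:
        stack.append(truck)
        # Release every truck that is currently wanted at the front.
        while stack and stack[-1] == next_needed:
            stack.pop()
            next_needed += 1
    # If anything is stuck in the side street, the order is impossible.
    return not stack

print("yes" if can_reorder([5, 1, 2, 4, 3]) else "no")  # yes
```

For the sample input 5 1 2 4 3 the stack empties completely, matching the expected "yes"; an arrival order like 2 3 1 leaves trucks stranded and yields "no".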
Omg Matvei is a fucking God. Promises are too good to not to use
"9am. I am still thinking about it. I guess something must have clicked during Martens' lecture because I've realized that if I used the entire data set, I'd have no need to keep track of moving averages to get the correct grad modifier.
I've been thinking about a lot of ideas on how to send rescaled gradients to the other layers, but it never occurred to me to consider what would happen if I simply used a large enough batch size and then took the L1 norm of the gradients based on that. If I did that, I'd get invariance to the scale of the rewards automatically!
9:15am. This kind of modulation is extremely important. I would not even have to split the dataset and sample from it one at a time. The net would do the right thing on its own.
I would not have to hack the head to rescale the gradients it sends to the input. I can just take the most meaningful part of the second order methods just so I could make RL work.
Even with something like a single sample, I could keep a ratio at the top level. Or I could just track them intra layer.
I am thinking how to deal with RNNs and it is giving me a headache.
9:25am. RNNs are trouble. I really will need to estimate statistics separately for each of the layers should I decide to go down that route. But it is worth it.
I seriously have no idea why the method I have in mind is not used considering all the trouble with gradient propagation. It is not like rescaling the gradient updates using the local norm is hard. It makes a lot more sense than getting the variance after all the values have been added together like in Adam and RMSprop.
https://arxiv.org/pdf/2003.07845.pdf PowerNorm - Rethinking Batch Normalization in Transformers
I really should just use a larger batch size. Let me read this paper. I should be able to learn something from it.
The work of (Ioffe, 2017) proposed batch renormalization to remove/reduce the dependence of batch statistics to batch size. It was shown that this approach leads to improved performance for small batch training as well as cases with non-i.i.d. data. Along this direction, the work of (Singh & Shrivastava, 2019) proposed “EvalNorm,” which uses corrected normalization statistics. Furthermore, the recent work of (Yan et al., 2020) proposed “Moving Average Batch Normalization (MABN)” for small batch BN by replacing batch statistics with moving averages.
I should get familiar with this.
10:05am. This paper is really interesting. How did they derive the intermediate gradient update for PN? I do not understand where the equation comes from. In fact, something like this was something I've been wondering a while now - how to approximate the gradient through a moving average.
Furthermore, the recent work of (Yan et al., 2020) proposed “Moving Average Batch Normalization (MABN)” for small batch BN by replacing batch statistics with moving averages.
I'd bet I'd find something on this subject in this paper.
10:35am. https://arxiv.org/abs/2001.06838 Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization
I can't figure out the approximate moving average update in power norm. I'd have to play around with it in order to understand it. Let me read this paper next.
https://arxiv.org/abs/1711.03953 Breaking the Softmax Bottleneck: A High-Rank RNN Language Model
This also caught my eye in the appendix. I am interested in what could replace a softmax.
10:40am. Wow MABN works even with a batch size of 1. Ok, I'll admit that there have been a few good things done since I was last active in ML. I should take the time to study batch norm properly.
Found this while Googling the sub for 'moving average'.
https://arxiv.org/abs/2010.07468 AdaBelief Optimizer - Adapting Stepsizes by the Belief in Observed Gradients
It got highly upvoted. Let me read it.
10:50am. This is actually pretty great. Rather than obsessing about my own thing, I feel like I am at a buffet table again. This was the feeling I had back in 2018 when everything was a meal.
To solve the problems above, we propose “AdaBelief”, which can be easily modified from Adam. Denote the observed gradient at step t as g_t and its exponential moving average (EMA) as m_t. Denote the EMA of g_t² and (g_t − m_t)² as v_t and s_t, respectively. m_t is divided by √v_t in Adam, while it is divided by √s_t in AdaBelief. Intuitively, 1/√s_t is the “belief” in the observation: viewing m_t as the prediction of the gradient, if g_t deviates much from m_t, we have weak belief in g_t, and take a small step; if g_t is close to the prediction m_t, we have a strong belief in g_t, and take a large step. We validate the performance of AdaBelief with extensive experiments. Our contributions can be summarized as:
Ah this is it. I was wondering how to use the gradient centering information for ages. To think it would be like this!
I thought along the lines of centering the gradients and gave it up due to bias issues, but yes, using it to set the learning rate is an option!
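Going by the quoted description, one AdaBelief-style step can be sketched in NumPy like this (a sketch, not the reference implementation - the published algorithm also adds an ε inside the s update, which I omit here for clarity):

```python
import numpy as np

def adabelief_step(w, g, m, s, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AdaBelief-style update following the quoted description.

    m: EMA of gradients; s: EMA of (g - m)^2, the "belief" term.
    When g is close to its prediction m, s is small and the step is large;
    when g deviates from m, s is large and the step shrinks.
    """
    m = b1 * m + (1 - b1) * g
    s = b2 * s + (1 - b2) * (g - m) ** 2
    # bias correction, as in Adam
    m_hat = m / (1 - b1 ** t)
    s_hat = s / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(s_hat) + eps)
    return w, m, s
```

The only structural difference from Adam is that s accumulates (g − m)² instead of g².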
I am still going to go with my idea of tracking the L1 norm of the grads before adding them to the weight matrix.
I need that for reward invariance, but I'll be using this optimizer on top.
11:10am. Hmmm, I never got the bias correction in the original Adam.
But since they are constants, I can imagine ignoring the ε term and folding that sort of thing into the learning rate. It is not a big deal.
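The bias correction in question compensates for the EMA being initialized at zero. With a constant gradient, the raw EMA underestimates the gradient early on, while the corrected estimate recovers it exactly:

```python
beta = 0.9
g = 3.0   # pretend the true gradient is constant
m = 0.0   # EMA starts at zero, so early estimates are biased low
for t in range(1, 6):
    m = beta * m + (1 - beta) * g
    m_hat = m / (1 - beta ** t)  # Adam's bias correction
# After 5 steps m is still well below g, but m_hat equals g,
# since m_t = (1 - beta^t) * g for a constant gradient.
```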
Let me take a look at the MABN paper next.
12:15pm. I don't get the two batch norm papers. They both involve hacking the backward pass, and I am not even sure where the update is in the MABN paper, as everything is so crammed together, interspersed with equations and theorems.
The Adabelief paper is exceptional. Adam did not quite sell me on it, but Adabelief will do the trick.
At the very least, thanks to the Powernorm paper, I understand the problems involved in batch norm better.
https://www.youtube.com/watch?v=oGH7dmwvuaY AdaBelief Optimizer: Theory and Practical Guidelines
I do not feel like watching this now.
Let me have breakfast here.
12:25pm. I'll watch some of the Lex interviews. I still haven't finished the one by Tegmark.
1:05pm. Done with breakfast and the Tegmark interview. I guess I'll go for the one by Littman next. Forget batch norm.
Right now I am looking into data dependent initialization papers.
1:10pm. Enough of this. Just discovering Adabelief was a benefit enough in itself in the past week. Deciding to normalize the gradients via getting the norm from a large batch directly or a moving average is also great. I'll also keep track of the norm of the inputs as well. I'll combine those two optimization tricks.
Forget thinking about the cases like RNNs with small batch sizes or convolutional nets. I do not need a perfect technique here.
I've decided that I won't go for the replay buffer normalization trick after all. What I've realized is that if I have perfect reward scaling, and my trick will in fact give that to me, I won't need to collect a whole replay buffer and turn everything into signals. I can just collect 1-2k samples and eval them all in place.
This is something even KFAC can't give me because the covariance matrix cannot be perfectly inverted. There are advantages to doing less instead of more.
Let me watch the interview by Littman.
He is the guy who made that Udacity RL course with Isbell. The course was well made, but I gave it a scathing review years ago because it barely covered deep learning and was mostly a philosophy course in the end. It did not even cover CFR.
Nonetheless I am curious what an RL researcher makes of recent events.
2:05pm. Ah, right. I said I would not sample, but how am I going to reweight by the reward probabilities in that case?
Yeah, I forgot about that. I'll need the buffer after all. Regardless, everything else will hold.
I am going to change the way Adabelief updates the mean variance. That thing has those epsilons everywhere. Instead of that, I'll pick a lower and upper bound and stick to it.
Regardless of the data, it is not wise to scale moves by more than 100x, for example. So I'll clip the variance updates to between [1/100^2, 100^2]. Forget messing with that crappy epsilon.
On top of the weight rescaling I'll be doing using the L1 norm, this will be enough for everything. The actual bounds should be related to the length of the moving average window, but it is not wise to go beyond a certain point regardless of what the data says.
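The clipping rule described above, as a hypothetical helper (the name is mine; the bounds are chosen so that 1/√s stays in [1/100, 100], i.e. no coordinate's step is rescaled by more than 100x in either direction):

```python
import numpy as np

def clipped_second_moment(s_hat, lo=1 / 100**2, hi=100**2):
    """Clip the belief/variance estimate to [1/100^2, 100^2] so that
    1/sqrt(s_hat) is bounded in [1/100, 100], replacing the usual
    epsilon term in the denominator."""
    return np.clip(s_hat, lo, hi)
```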
2:15pm. https://youtu.be/c9AbECvRt20?t=350
And then I had kids and I stopped listening to music and I've started to realize that my musical taste has sort of frozen out. And so I decided in 2018 I think to start listening to the top 10 billboard songs each week. So I'd be on the treadmill and I listen to that week's top 10 songs so I could find out what was popular now. And what I've discovered that I had no musical taste whatsoever. I like what I am familiar with. So the first time I'd hear a song, the first week it was on the chart, I'd be like 'Ugh'. And the second week I was into it a little bit. And the third week, I was loving it. And the fourth week it was just part of me.
An NPC doing his NPC signaling. I suppose my life would be easier if I were like that, but nonconformity is an advantage in pursuing one's own path.
https://www.reddit.com/r/MachineLearning/comments/hciw10/r_wolfenstein_and_doom_guy_upscaled_into/
I am checking out the top rated posts in the last year and stumbled upon this.
https://arxiv.org/abs/2011.02150 EAdam Optimizer - How ε Impact Adam
Let me take a look at this paper.
...Trivial paper. Never mind it. I was wondering why AdaBelief adds the ε parameter inside the EMA update and then adds it again in the denominator, but this paper is a point in favor of them doing the right thing.
Now Google is confusing me. It says that AdaBelief has 15 citations, but in the past I could look at those links. Where is Google Scholar?
Some adversarial generation. Nevermind.
I did not notice from the handle that it is Rich Sutton endorsing this course.
There are a bunch of links, but nothing good like Adabelief.
https://www.reddit.com/r/MachineLearning/comments/i4ko0u/r_hopfield_networks_is_all_you_need/ Hopfield Networks is All You Need
Here is a paper that I should save. The crap on the ML sub is just gossip, like that Hinton thread. I am just wasting my time here.
95 pages. Wow.
3:05pm. It really was a complete accident that I found out about AdaBelief.
https://arxiv.org/abs/1910.06764 Stabilizing Transformers for Reinforcement Learning
Another interesting paper. Let me save it for later reading. At this point I am below 200 upvotes and had enough of digging through trash. This one above is a Deepmind paper so even if I had missed it now, I'd have found it at some point. Let me watch the Adabelief talk and then I'll get back to the interviews.
https://www.youtube.com/watch?v=oGH7dmwvuaY AdaBelief Optimizer: Theory and Practical Guidelines
It is only 23m.
https://youtu.be/oGH7dmwvuaY?t=20
Hmmm, SGD is not good for training GANs? I recall it being said in one of the Deepmind lectures that KFAC did well on that.
https://youtu.be/oGH7dmwvuaY?t=58
This is different than in the paper. Do I maybe have an out of date version?
https://arxiv.org/pdf/2010.07468v1.pdf
No, it is the most up to date, but it seems the update went through some changes regarding that epsilon.
https://youtu.be/oGH7dmwvuaY?t=666
He talks about epsilons here.
https://youtu.be/oGH7dmwvuaY?t=784
What is this 'True = False' here?
3:45pm. https://youtu.be/oGH7dmwvuaY?t=1084
If it was the old me, I'd completely ignore the role of epsilon, but they are constantly switching it around in these papers.
This just goes to show that I should be combining my own and this idea.
The reason why these large epsilons are necessary is bad conditioning, but if I had a bit of extra modulation to squeeze the gradient signals towards where they should be, I'll be able to fix this.
Everything is set.
3:55pm. I am going to modify AdaBelief so it uses the L1 norm instead of the variance. Variance is not appropriate for this kind of task of estimating the centered mean. Right now it resembles an AC update, and those don't need to square their accumulation.
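A sketch of that modification (hypothetical - tracking the EMA of |g − m| and dividing by it directly, with no square root, instead of AdaBelief's √ of the EMA of (g − m)²):

```python
import numpy as np

def adabelief_l1_step(w, g, m, a, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Hypothetical L1 variant of AdaBelief: a tracks the EMA of |g - m|
    and the update divides by it directly, so outliers are penalized
    linearly rather than quadratically."""
    m = b1 * m + (1 - b1) * g
    a = b2 * a + (1 - b2) * np.abs(g - m)
    # bias correction, unchanged from Adam/AdaBelief
    m_hat = m / (1 - b1 ** t)
    a_hat = a / (1 - b2 ** t)
    w = w - lr * m_hat / (a_hat + eps)
    return w, m, a
```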
I am thinking what would happen if you increase the gradients by a large amount and have something like 2 - 1 vs that 5 times.
Right now AdaBelief would...no wait. I am on the wrong track. I think AdaBelief is right since it penalizes outliers...
...No forget this line of thought, let me finish the talk.
Done. Now let me go for the talk by Littman.
I am going to finish this soon and then take a proper break for one day. After that, I'll make working on ML my freedom again.
https://www.youtube.com/watch?v=LAyZ8IYfGxQ
Here is a podcast by Isbell as well. I must have missed it yesterday.
Thankfully, they'll finish the AI risk section soon because it is putting me to sleep.
5pm. https://youtu.be/c9AbECvRt20?t=3261
Mike: So I think as humans often do, as in recent past as well, people extrapolate. It is like, oh if you can do that which is obviously very hard. Then obviously you could do all these other problems that we wanna solve, that we know are also really hard. And it turned out very few of them ended up being practical. Partly because I think...neural nets, certainly at the time were struggling to be consistent and reliable. And so training them in a reinforcement learning setting was a bit of a mess. I had generation after generation of masters students who wanted to do value function approximation. Basically, reinforcement learning with neural nets. And over and over and over again, we were failing. We couldn't get the good results that Jerry Tesauro got.
Mike: I now believe that Jerry is a neural net whisperer. He has a particular ability to get neural networks to do things that other people would find impossible. And it is not the technology, it is technology and Jerry together.
Lex: Yes, which speaks to the role of the human expert in the process of machine learning.
Mike: Right, it is so easy, we are so drawn to the idea that it's the technology that is where the power is coming from, that we lose sight of the fact that we need a really good...that's just, no one would think, hey there is this really great piece of software. Here is like GNU Emacs or whatever. (Lex laughs) Um, doesn't that prove that computers are super powerful and basically gonna take over the world.
Mike: No, Stallman is a great hacker. He was able to make the code do these amazing things. He couldn't have done it without the computer. But the computer couldn't have done it without him.
Mike: And so people discount the role of people like Jerry who have a, who have a particular set of skills..
5:55pm. https://youtu.be/c9AbECvRt20?t=4460
I am surprised to see Littman saying that he read about people teaming up with computers to beat computers. I looked into that, and computers + human being better than just the computer was only true at the start.
But I suppose it is plausible logically.
https://youtu.be/c9AbECvRt20?t=4569
Lex mentions that pairs are not better than individual AI players in chess as well.
7:15pm. https://youtu.be/c9AbECvRt20?t=6513
Now it comes out - I hadn't read the book, but he recommended Program or Be Programmed. I can imagine what it is about. Yes, everyone will need to program, and much more, in the future.
7:20pm. https://www.youtube.com/watch?v=uPUEq8d73JI David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning | Lex Fridman Podcast #86
I'd like to go through this interview next, but I am too tired.
Now that I've found AdaBelief, I realize that measuring L1 norms of the gradients and inputs would be covered by the optimizer itself.
But one thing that I can see is that the optimizer, because it measures only individual values, would be a lot slower in adapting.
Basically, if the other layers are poorly initialized, their scale will be uniformly messed up. Just by looking at a small part it would be possible to tell what the scale is, whereas an optimizer like AdaBelief or Adam considers each weight in isolation.
7:30pm. Hrmmmm...now that I've written this, I am not sure.
I mean, couldn't AdaBelief do the same thing if I ramped up the other beta factor?
No it is not the same thing, what I wrote is right. Suddenly a part of the other net changing should cause the entirety of the present net to adjust even on parts it has not seen.
That is what my own update gives me.
There are these modulation dynamics at different scales...it will work. But this is quite fascinating. My drive to do ML has definitely been rekindled.
https://paperswithcode.com/method/radam
I am just checking out some of those other methods AdaBelief compared itself against.
Thus, to reduce such variance, it is better to use smaller learning rates in the first few epochs of training - which justifies the warmup heuristic.
Such a complicated thing. AdaBelief will have the same problem as well, but my new modulation rules will take care of it. They will definitely update much faster than the AdaBelief centered variance.
https://arxiv.org/abs/2006.00719 AdaHessian - An Adaptive Second Order Optimizer for Machine Learning
Let me take a look at this. This is quite interesting.
https://www.youtube.com/results?search_query=adahessian
There is a video that compares all these. Pss, just pick Ada ... The part that says the name is cut off by the time length! Is it AdaBelief? Or AdaHessian?
https://arxiv.org/abs/2007.01547 Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
This paper really has everything.
A key insight of our comparison is that a practitioner with a new deep learning task can expect to do about equally well by taking almost any method from our benchmark and tuning it, as they would by investing the same computational resources into running a set of optimizers with their default settings and picking the winner.
8pm. The AdaHessian paper confuses me. Isn't the diagonal of the Hessian the gradient squared?
The Hessian diagonal D can be efficiently computed using the Hutchinson’s method.
What is this method?
http://blog.shakirm.com/2015/09/machine-learning-trick-of-the-day-3-hutchinsons-trick/
Ok, I see. So in the Hessian free method are they estimating D just by multiplying the gradient over time with random vectors?
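Hutchinson's trick, as I understand it: for Rademacher vectors z, E[z ⊙ (Hz)] = diag(H), since E[z_i z_j] = δ_ij, so the diagonal can be estimated from Hessian-vector products alone. A toy sketch with an explicit matrix (in practice Hz would come from autodiff, not a stored H):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])  # toy Hessian with known diagonal [2, 3, 4]

est = np.zeros(3)
n = 10000
for _ in range(n):
    z = rng.choice([-1.0, 1.0], size=3)  # Rademacher probe vector
    est += z * (H @ z)  # only a Hessian-vector product is needed, never H itself
est /= n
# est converges to diag(H); the off-diagonal contributions average out
```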
8:30pm. Actually it seems the method requires a Hessian vector product. I'll give it a pass. Let me read the Crowded Valley paper.
8:45pm. 10/34. Holy crap, there is a ton of these.
9pm. AdaBelief caught my eye, but in these tests it performs slightly worse than Adam. Ehhh, who knows.
I'll go with it anyway. Since it does use gradient centering information in its design, I am absolutely going to use it over anything else. It is what I'd expect a first order method to be doing. It does seem to be good at optimizing autoencoders. Though I might be falling for the authors' story.
I'll have to take risks and go with what makes sense.
9:10pm. It is just so difficult to use my usual reasoning method to get to anything. In programming, the obsession would converge to a proof of correctness of a particular program. Here I am just spinning my mind, trying to get the smallest of advantages. And I can never find solid ground. And because of that I can never truly build my expertise.
The post-human AIs will have their work cut out for them. I am not going to be able to figure this out with my meager abilities. But in some respects I already know a decent amount.
https://scholar.google.com/scholar?cites=794903835077311857&as_sdt=2005&sciodt=0,5&hl=hr
Oh, here are the actual citations. I was wondering why there were only a few before. I was looking at the wrong page.
https://openreview.net/pdf/b3d064c86ebe3e60b1df68fff70ee335af15d5af.pdf ADAMP: SLOWING DOWN THE SLOWDOWN FOR MOMENTUM OPTIMIZERS ON SCALE-INVARIANT WEIGHTS
9:40pm. Another hacky update. But it does mention that LN does not make the weights scale invariant. The fix they propose is only there for BN.
I'll ignore BN in my nets since anything I am interested in is not really applicable to it. And it has not been found to work better for RL problems. ...Actually, I am not at all sure if these last two sentences are true, but things are complex enough without BN hacking the backward pass and requiring moving averages to be computed. I have my local modulation idea to go with, so I'll go with that.
AdaBelief is exactly what I needed here. With Adam, the local modulation I thought of would be somewhat superfluous, but with AdaBelief the rules compose nicely.
Enough of this. Let me catch up with Frieren. Tomorrow, I'll go for the interview with David Silver and the rest. I should also get started on the monthly review.
As for programming, I should be able to start once my inner tension lets up a little.
Though it seems like I've been slacking for the past week, my work day does not seem to have a start and an end point, and my nerves have been honed to razor sharpness. At some point my brain will get tired of constantly being in a state of full battle readiness, but until then I'll continue thinking and seeking out info.
Get Route 1 set for code freeze (#872)
- Change the color of the roof It looked odd before since the logos were off-center.
The blue actually works well in my opinion
-
First attempt at revamping everything on route1 We don't need the xero grunts blocking everything anymore
-
Clean up base.po somewhat
-
Fix collisions Walking on top of trees is never good
-
Battle area has been created! A variable needed to be added to remove max and his friends
-
Capitalization is king (Sorry for the small commit, it was bugging me)
Co-authored-by: ZhongQian TiaoGong [email protected]
holy mother of god and all her wacky nephews, wxwidgets and gtk are both complete shitshows
[NON-MODULAR] More drugs. (Part 2) (#5170)
- whole bunch of shit
-gives quaaludes and opium more distinct effects -adds pcp -adds thc -replaces space drugs in weed plants with thc -adds hash -adds dabs -rebalances amount of drugs you can produce
-
Update opium.dm
-
newlines
-
runtime fix
-
runtime fix attempt 2
-
amazing
-
i dont recall the variable names being this fucked
-
I FUCKING HATE GIT
-
please so help me god