Of the events recorded by gharchive.org, 2,274,107 were push events containing 3,386,506 commit messages amounting to 262,091,942 characters, filtered with words.py@e23d022007... down to these 50 messages:
proc: fix missing conversion to 'iterate_shared'
I'm looking at the directory handling due to the discussion about f_pos locking (see commit 797964253d35: "file: reinstate f_pos locking optimization for regular files"), and wanting to clean that up.
And one source of ugliness is how we were supposed to move filesystems over to the '->iterate_shared()' function that only takes the inode lock for reading many many years ago, but several filesystems still use the bad old '->iterate()' that takes the inode lock for exclusive access.
See commit 6192269444eb ("introduce a parallel variant of ->iterate()") that also added some documentation stating
Old method is only used if the new one is absent; eventually it will
be removed. Switch while you still can; the old one won't stay.
and that was back in April 2016. Here we are, many years later, and the old version is still clearly sadly alive and well.
Now, some of those old style iterators are probably just because the filesystem may end up having per-inode mutable data that it uses for iterating a directory, but at least one case is just a mistake.
Al switched over most filesystems to use '->iterate_shared()' back when it was introduced. In particular, the /proc filesystem was converted as one of the first ones in commit f50752eaa0b0 ("switch all procfs directories ->iterate_shared()").
But then later one new user of '->iterate()' was then re-introduced by commit 6d9c939dbe4d ("procfs: add smack subdir to attrs").
And that's clearly not what we wanted, since that new case just uses the same 'proc_pident_readdir()' and 'proc_pident_lookup()' helper functions that other /proc pident directories use, and they are most definitely safe to use with the inode lock held shared.
So just fix it.
This still leaves a fair number of oddball filesystems using the old-style directory iterator (ceph, coda, exfat, jfs, ntfs, ocfs2, overlayfs, and vboxsf), but at least we don't have any remaining in the core filesystems.
I'm going to add a wrapper function that just drops the read-lock and takes it as a write lock, so that we can clean up the core vfs layer and make all the ugly 'this filesystem needs exclusive inode locking' be just filesystem-internal warts.
I just didn't want to make that conversion when we still had a core user left.
Signed-off-by: Linus Torvalds [email protected]
Signed-off-by: Christian Brauner [email protected]
No Never
Wow. Wow wow wow! Went down the rabbit hole with discussions in the MM discord, it was great! Thanks @HoldYourWaffle!
I was gonna include more of our conversations, but there were so many I don't know if it would make sense here haha. Super productive, really fun.
I tried asserting the return type to reflect whether the input generic is valid/unset or not, but it's not reliable to work with, because it widens the types too much.
Essentially:
type SafeNever<T> = [T] extends [never] ? unknown : T;
export declare function parse<T extends RootTag = never>(data: string): SafeNever<T>;
But this doesn't work well/breaks when working with `NBTData`, which doesn't allow `unknown` as an input generic type (which is correct).
// @ts-expect-error
export declare function read<T extends RootTag = never>(data: Uint8Array): Promise<NBTData<SafeNever<T>>>;
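For reference, here is a runnable sketch of the `SafeNever` approach on its own, before `NBTData` enters the picture. `JSON.parse` stands in for the real SNBT parser here; the type definitions are the ones from this commit.

```typescript
type Tag = string | number | boolean | ListTag<Tag> | CompoundTag;
interface ListTag<T extends Tag> extends Array<T> {}
interface CompoundTag {
  [name: string]: Tag;
}
type RootTag = CompoundTag | ListTag<Tag>;
type SafeNever<T> = [T] extends [never] ? unknown : T;

// JSON.parse stands in for the real SNBT parser in this sketch.
function parse<T extends RootTag = never>(data: string): SafeNever<T> {
  return JSON.parse(data) as SafeNever<T>;
}

const untyped = parse('{"a":1}');            // type: unknown (the "safe" default)
const typed = parse<CompoundTag>('{"a":1}'); // type: CompoundTag
console.log(untyped, typed);
```

On its own this behaves as intended; the trouble only starts when the widened `unknown` result has to feed into APIs that require a `RootTag`.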
So, the next step was to try the other option available: move the generic type check out of the generic and into the return type. But this isn't realistic/reliable either, and it's just as hard to work with as the first approach.
export declare function parse<T>(data: string): T extends RootTag ? T : unknown;
This worked semi-nicely with incorrect values, because if they weren't `RootTag`-compatible, the return value would always be `unknown`. Perfect! But a great point that HoldYourWaffle pointed out was that it doesn't correctly represent the control flow of the function path, because `unknown` is not the same as `never` (not talking types, but actions).
const gg: unknown = parse<5>("5");
While the type is 'correct' because it isn't a shape, it's not really correct because it should show an error here, mirroring what it would do at run time. Because '5' isn't a valid SNBT string, it would 'never' complete, meaning that `unknown` isn't correct, because that would mean it did run as expected and you just don't know the value.
So that's not ideal, and not accurate. It also didn't help that the same `NBTData`-and-the-like parameter type issues were happening here, because the return type has a possible `unknown` in it, which isn't accepted by the other parameters for the API (again, which is correct).
So then we came back to the setup of how this commit looks right now, where the generics don't have default values. This means that if unsupplied, the return types for the functions are `RootTag`, which didn't seem too bad. But `CompoundTag` is built with an index signature, and it's paired with `ListTag`, which is built from the `Array<Tag>` type. Sounds good and dandy, but it turns out that an index signature can validate alongside the other member of a union type, meaning that a plain `RootTag` declaration allows you to (type-wise) access methods and values on the value which might not exist! What the heck!
// hallucinate.ts
export type RootTag = CompoundTag | ListTag<Tag>;
export type Tag = string | number | boolean | ListTag<Tag> | CompoundTag;
export interface ListTag<T extends Tag> extends Array<T> {}
export interface CompoundTag {
// Try commenting out this index signature line
[name: string]: Tag;
}
declare const gg: RootTag;
gg.concat;
gg.length;
// @ts-expect-error
gg.concat();
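To see the footgun at runtime too, here is a small runnable counterpart using the same type definitions: the property access type-checks through the index signature, but the value simply isn't there.

```typescript
type Tag = string | number | boolean | ListTag<Tag> | CompoundTag;
interface ListTag<T extends Tag> extends Array<T> {}
interface CompoundTag {
  [name: string]: Tag;
}
type RootTag = CompoundTag | ListTag<Tag>;

// At runtime this is a plain CompoundTag, but it's typed as the union:
const gg: RootTag = { hello: "world" };

// Type-checks: ListTag has .concat, and CompoundTag's index signature
// claims *every* string key exists. At runtime, though:
const ghost = gg.concat;
console.log(ghost); // undefined; the type system hallucinated it
```

The index signature makes the `CompoundTag` half of the union claim any property you ask for, so the union as a whole lets array-looking accesses through on plain objects.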
So you can see the methods aren't callable, but it does let you access properties that don't exist. This is the case even with `noPropertyAccessFromIndexSignature` and `noUncheckedIndexedAccess` (I think that one has more to do with arrays, but yeah).
So, weighing how realistic the different options are, I decided there are a few different ways to take this decision (I think I mentioned it in #32, but yeah):
- If the generic was optional, the catch for the implicit nature of the generic is that when you use it, it will be `unknown`
- But making the generic non-optional means you have to decide ahead of time whether it's strongly-typed, or turtle-brained as `any`
- So it moves the first step to the beginning, when the variable is defined, not when it's accessed
- So it removes `unknown` from it altogether

Optional Generic state: either `unknown`, `interface ThisIsMyShape`, or `any`
Non-Optional Generic state: `ERROR: ts(missing generic parameter)`, `interface ThisIsMyShape`, or `any`
Now that we know that `unknown` and `never` aren't possible/feasible/realistic, the remaining options are either `interface ThisIsMyShape`, or defaulting back to `any`. If function generics could be made mandatory, then I could essentially enable the 'unknown' state, where it would show as a missing generic when used without a supplied generic, like `NBTData` does, because classes can have required generics.
// @ts-i-want-this-to-error - If you could make this generic mandatory, the return type could be `unknown` if you didn't supply a generic value, rather than being `RootTag`, which is an option we've narrowed out.
export declare function parse<T extends RootTag>(data: string): T;
So, if:
- `never` can't be returned, nor used reliably
- `unknown` can't be returned, nor used reliably
- The return type can't be narrowed, nor used reliably
- `RootTag` can't be checked outside of the generic, nor used reliably
That leaves it back to either defaulting the generic to `RootTag`, or `any`. And for me, I think making it consistent is the most important part. `RootTag` can make up values altogether, which could be misleading. But if you plain just see an `any` there, you know for a fact that anything goes, so you know it will hallucinate values, because that's what it does. It doesn't mislead you to values that are/might not be there.
So while `any` is unsafe because it allows you to traverse it as anything, I think that's better than traversing it as what you think it is. That's not the purpose of typing it out; if you want to type it out just for the sake of making it not `any`, that seems more dangerous to me, because that's saying that you know what `RootTag` is. But `any` just plain says, "hey, all rules are gone here, beware".
`unknown` would be the best scenario here for the return type, because it would warn you if you are trying to use the return type as if it were `any` implicitly. It would be best to have the user specify that they intend to use `any` on purpose.
Wow, learned a lot from this! Thanks again to @HoldYourWaffle for helping discern all of this in the last day or so. Can't believe how good our group of 'turtle-brains' over at Minecraft Manipulator is at getting stuff done. We might be slow, and/or quiet sometimes at things, but when we get movin', oh man!
https://stackoverflow.com/questions/71193522/typescript-inferred-never-is-not-never
microsoft/TypeScript#23182 (comment)
https://www.typescriptlang.org/play#example/unknown-and-never
https://stackoverflow.com/questions/55610260/how-to-make-generics-mandatory
https://stackoverflow.com/questions/69763621/check-if-object-is-empty-using-typescript-generic
https://www.youtube.com/watch?v=J1f5b4vcxCQ
The next thing we started talking about after this commit-and-related-to-it was whether NBT data should be represented using primitives, or wrapper objects with methods and such. I think it boiled down to: I like primitives for declaring new data structures, but primitive wrappers are probably the better choice for working with the data itself. Think of things like `Array.prototype.push()`. You wouldn't call it as a standalone function, or as a static method; it doesn't really make sense to do it that way.
const thingo: number[] = [];
thingo.push(5);
Array.push(thingo,5);
pushArray(thingo,5);
HoldYourWaffle:
Yes, composing NBT from scratch is much nicer with the primitive variant, no argument there. The thing is: you're not doing that regularly, at least not as much as manipulating the data, especially for a project called Minecraft Manipulator :)
Secondly: the manipulation part of any program is much more intricate, bug-prone and maintenance-heavy than composing some static-ish data. When debugging shenanigans or implementing new features I need to be able to read, interpret, understand and write manipulation code quickly Finally: while the "primitive composing" is certainly nicer, the alternative is not that bad. The "convenience method manipulation" variant has a lot of advantages compared to the "functional" alternative though.
Me:
That's a great argument, and I think I have to convince myself now that it makes more sense
HoldYourWaffle:
🔥
I think it's a great argument; I think my holdup is whether NBT is something that needs an 'abstract' representation. I have felt that working with plain functions around the primitives allows for more composability, and you can build things together better using plain functions to manipulate the structures, rather than instance methods on the data abstractions themselves.
Me:
I was gonna say, I want to look into classes for these again, if we do it right. Cause before, I didn't like how it turned out (NBTify's primitive wrappers, from about last year). So once we start to see where having classes could be helpful, then we can move things over properly to fill in the gaps
About where I think classes should be used/concerns about instance methods:
export declare class Thingo {
constructor(value: string);
noice(): number;
}
const noice: number = new Thingo("heya").noice();
// vs - this example is so cursed I'm sorry
export declare function toNoice(value: string): number;
const noice: number = toNoice("heya");
Ok, the example doesn't show it well, but I think I know what it is
So my concern with method chaining is that you can only use the method by having an instance in the first place, meaning if you only need the class instance just for that one call, you have to make a class instance just to use that method
But if the method is just a function, you can use it by itself without having to build a new class just to use it
^^this is where it's at, not the derpy demo lol^^
Oooh
I think I personally draw the line between when to use a class vs. just a function at whether there is state within the 'thing' that might have to be maintained or mutated during its life of using it
So for a connection to your server, you'd want a class because it has methods that talk to that one server, and it will manage the details about that connection for you, internally
But if you just wanted to zip up a file using ZLIB, you might just want to do it in one go. Doing `new ZLIB("gzip").compress("myString")` isn't as ideal as just doing `gzip("myString")`
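That rule of thumb can be sketched quickly (all names below are made up for illustration): state that lives across calls wants a class; a one-shot transformation wants a plain function.

```typescript
// Stateful: a connection tracks details across calls, so a class fits.
class ServerConnection {
  private sent = 0; // state maintained over the connection's lifetime
  constructor(readonly host: string) {}
  send(_message: string): number {
    // (a real implementation would talk to `host` here)
    this.sent += 1;
    return this.sent;
  }
}

// Stateless one-shot: nothing to maintain, so a plain function fits.
function shout(value: string): string {
  return value.toUpperCase() + "!";
}

const conn = new ServerConnection("example.test");
conn.send("heya");  // 1
conn.send("noice"); // 2
console.log(shout("gzip me")); // "GZIP ME!"
```

The connection only makes sense as an object because each call depends on what happened before; `shout` has no such memory, so wrapping it in a class would just add ceremony.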
back to normal me again I think that's good for now. I got a lot of the good stuff into this commit.
Wait, went back to the Discord, had some more rants! Here's the convo; signing off this commit with this one :P
Me:
Working on the commit message for this discussion over the last day or so, it's going pretty good actually
HoldMyTurtleBrain:
The commit message or the discussion? 👀
Me:
I usually summarize my learnings/findings as part of my commits Almost like a coding diary, it really helps keep track of references for learned things along the way, especially since it's right next to the code related to it that you just committed
HoldMyTurtleBrain:
Huh, that's an interesting thing to do (ref below). Damn, now I kinda wish I would've done that over the years, I would've had some great 4am rants to look back on ahahaha
How often do you find yourself looking back at these? I do have a habit of including thought processes of non-obvious reasoning or design decisions in commit messages, but I end up barely using them, and the people reviewing PRs seem to always miss them...
Me:
(ref above) That's exactly what they are at times haha I gotta find one of my more ridiculous ones
I don't usually look back too often, but I think it's completely worth it, because it's worse to have found a solution and to just have the fix there. There's no learning in that, so if anyone wonders why I did something a certain way, they just see what I did, not why I did it.
So if we found a new bug with TypeScript earlier, now we can go back to this commit to find out what went wrong. It's also helpful to keep track of the things I've already tried. Sometimes I will commit code that isn't right/won't work, just so I can explain it as a failed experiment. That's been a very big thing that has helped a lot. It's not about perfect code, it's about getting towards 'perfect' code lol
Improved spider web AI (#76637)
The AI I coded for spiders deciding where to make webs when they aren't otherwise occupied would do so by finding the closest valid tile, which seemed like a good idea at the time. The problem with that is that the "closest" algorithm we use has a predictable search pattern, which meant that spiders would always predictably make a diagonal line of webs pointing South West, which looked very silly. I've rewritten how they pick targets to introduce some randomness, which causes them to properly spread out and make a nicer-looking structure, which serves purely to annoy spacemen who need to pass through this area.
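The change can be sketched in TypeScript (the actual AI is DM code; these names are hypothetical): instead of always taking the first result of a deterministic "closest" search, pick a valid tile uniformly at random so the webs spread out.

```typescript
interface Tile {
  x: number;
  y: number;
}

// Before (sketch): a deterministic search always yields the same tile
// first, which is what produced the diagonal line of webs.
function pickClosest(valid: Tile[]): Tile | undefined {
  return valid[0];
}

// After (sketch): a uniformly random valid tile; `rng` is injectable
// so the choice can be tested deterministically.
function pickWebTarget(
  valid: Tile[],
  rng: () => number = Math.random,
): Tile | undefined {
  if (valid.length === 0) return undefined;
  return valid[Math.floor(rng() * valid.length)];
}
```

Uniform selection over the valid set removes the directional bias entirely, at the cost of the spider occasionally webbing a far tile first, which is exactly the "spread out" look described above.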
I'll be honest I mostly did this while bored waiting for other PRs which I require for my other branch to get merged.
This probably only annoyed me to be quite honest and if you left one alone for long enough it would fill enough space that you couldn't tell anyway, but it does look nicer now.
🆑 add: AI-controlled spiders will make more web-shaped webs. /🆑
windows: ignore empty PATH elements
When looking up an executable via the `_which` function, Git GUI imitates the `execlp()` strategy, where the environment variable `PATH` is interpreted as a list of paths in which to search.
For historical reasons, stemming from the olden times when it was uncommon to download a lot of files from the internet into the current directory, empty elements in this list are treated as if the current directory had been specified.
Nowadays, of course, this treatment is highly dangerous, as the current directory often contains files that have just been downloaded and not yet been inspected by the user. Unix/Linux users are essentially expected to be very, very careful to simply not add empty `PATH` elements, i.e. not to make use of that feature.
On Windows, however, it is quite common for `PATH` to contain empty elements by mistake, e.g. as an unintended left-over entry when an application was installed from the Windows Store and then uninstalled manually.
While it would probably make most sense to safe-guard not only Windows users, it seems to be common practice to ignore these empty `PATH` elements only on Windows, but not on other platforms.
Sadly, this practice is followed inconsistently between different software projects, where projects with few, if any, Windows-based contributors tend to be less consistent or even "blissful" about it. Here is a non-exhaustive list:
Cygwin:
It specifically "eats" empty paths when converting path lists to
POSIX: https://github.com/cygwin/cygwin/commit/753702223c7d
I.e. it follows the common practice.
PowerShell:
It specifically ignores empty paths when searching the `PATH`.
The reason for this is apparently so self-evident that it is not
even mentioned here:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables#path-information
I.e. it follows the common practice.
CMD:
Oh my, CMD. Let's just forget about it, nobody in their right
(security) mind takes CMD as inspiration. It is so unsafe by
default that we even planned on dropping `Git CMD` from Git for
Windows altogether, and only walked back on that plan when we
found a super ugly hack, just to keep Git's users secure by
default:
https://github.com/git-for-windows/MINGW-packages/commit/82172388bb51
So CMD chooses to hide behind the battle cry "Works as
Designed!" that all too often leaves users vulnerable. CMD is
probably the most prominent project whose lead you want to avoid
following in matters of security.
Win32 API (`CreateProcess()`):
Just like CMD, `CreateProcess()` adheres to the original design
of the path lookup in the name of backward compatibility (see
https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessw
for details):
If the file name does not contain a directory path, the
system searches for the executable file in the following
sequence:
1. The directory from which the application loaded.
2. The current directory for the parent process.
[...]
I.e. the Win32 API itself chooses backwards compatibility over
users' safety.
Git LFS:
There have been not one, not two, but three security advisories
about Git LFS executing executables from the current directory by
mistake. As part of one of them, a change was introduced to stop
treating empty `PATH` elements as equivalent to `.`:
https://github.com/git-lfs/git-lfs/commit/7cd7bb0a1f0d
I.e. it follows the common practice.
Go:
Go does not follow the common practice, and you can think about
that what you want:
https://github.com/golang/go/blob/go1.19.3/src/os/exec/lp_windows.go#L114-L135
https://github.com/golang/go/blob/go1.19.3/src/path/filepath/path_windows.go#L108-L137
Git Credential Manager:
It tries to imitate Git LFS, but unfortunately misses the empty
`PATH` element handling. As of time of writing, this is in the
process of being fixed:
https://github.com/GitCredentialManager/git-credential-manager/pull/968
So now that we have established that it is a common practice to ignore empty `PATH` elements on Windows, let's assess this commit's change using Schneier's Five-Step Process (https://www.schneier.com/crypto-gram/archives/2002/0415.html#1):
Step 1: What problem does it solve?
It prevents an entire class of Remote Code Execution exploits via
Git GUI's `Clone` functionality.
Step 2: How well does it solve that problem?
Very well. It prevents the attack vector of luring an unsuspecting
victim into cloning an executable into the worktree root directory
that Git GUI immediately executes.
Step 3: What other security problems does it cause?
Maybe non-security problems: If a project (ab-)uses the unsafe
`PATH` lookup. That would not only be unsafe, though, but
fragile in the first place because it would break when running
in a subdirectory. Therefore I would consider this a scenario
not worth keeping working.
Step 4: What are the costs of this measure?
Almost nil, except for the time writing up this commit message
;-)
Step 5: Given the answers to steps two through four, is the security measure worth the costs?
Yes. Keeping Git's users Secure By Default is worth it. It's a
tiny price to pay compared to the damages even a single
successful exploit can cost.
So let's follow that common practice in Git GUI, too.
Signed-off-by: Johannes Schindelin [email protected]
The bell is about to ring and rain begins falling.
The Creator is going to “ring the bell” very soon. The exact time is not going to be announced, but the order and intensity of the bell ringing (earthquakes) is as follows:
This earthquake’s epicentre will be located in Sandy Hook, TN (an unincorporated city near Mount Pleasant) in Maury county.
This will be a large earthquake for the CIA’s destruction of Orr chapel, the murder of its members.
This quake also recognises Micah’s slain family and his family’s graveyard that was destroyed, both by or at the direction of the CIA.
This earthquake’s epicentre will be located at Mount Calvary Baptist Church in Lexington, KY (4742 Todd’s Road).
This will be for the massacre that occurred there.
This earthquake’s epicentre will be in Jessamine county, KY. It will signal when Micah (born: Kelvin Eugene Williams on March 23, 1977) has started his mission for the Creator. This will be Micah’s 23rd mission on Earth.
This quake will also be to recognise the lives lost in Jessamine county, mostly Micah’s friends, throughout the larger tragedy.
This earthquake’s epicentre will be in Kanawha county, WV. It will signal when TomTom (to practically everyone TomTom, to others Thomas, born: September 23, 1978) has started his mission for the Creator. This also is TomTom’s 23rd mission on Earth.
This quake will also recognise the lives lost in Kanawha county, mostly TomTom’s friends and family, throughout the larger tragedy.
The Creator hopes this will be all He has to do to stop the ongoing massacre and end the greater tragedy.
And so begins the “Purple Rain”. That means “It’s raining men.”. He has never had to rain down men, although the 23rd mission for both of His servants, they’ve never been identified before. This is just the beginning of the “Great Storm” but it’s the “Nature of our Kind.”.
He does this because life is supposed to be fun. After all, “Girls just want to have fun.” We WILL get back to fun in this neighbourhood called Earth.
Guys, don’t get hung up on “girls” (it does not always mean female).
Added quake intensity... Thanks for thinking.
The Creator is going to “ring the bell” very soon. The exact time is not going to be announced, but the order and intensity of the bell ringing (earthquakes) is as follows:
This earthquake’s epicentre will be located in Sandy Hook, TN (an unincorporated city near Mount Pleasant) in Maury county.
This will be a large earthquake for the CIA’s destruction of Orr chapel, the murder of its members.
This quake also recognises Micah’s slain family and his family’s graveyard that was destroyed, both by or at the direction of the CIA.
This earthquake’s epicentre will be located at Mount Calvary Baptist Church in Lexington, KY (4742 Todds Road).
This will be the largest quake for the massacre that occurred there.
This earthquake’s epicentre will be in Jessamine county, KY. It will signal when Micah (born: Kelvin Eugene Williams on March 23, 1977) has started his mission for the Creator. This will be Micah’s 23rd mission on Earth.
This minor quake will also be to recognise the lives lost in Jessamine county, mostly Micah’s friends, throughout the larger tragedy.
This earthquake’s epicentre will be in Kanawha county, WV. It will signal when TomTom (to practically everyone TomTom, to others Thomas, born: September 23, 1978) has started his mission for the Creator. This also is TomTom’s 23rd mission on Earth.
This minor quake will also recognise the lives lost in Kanawha county, mostly TomTom’s friends and family, throughout the larger tragedy.
The Creator hopes this will be all He has to do to stop the ongoing massacre and end the greater tragedy.
And so begins the “Purple Rain”. That means “It’s raining men.”. He has never had to rain down men, although the 23rd mission for both of His servants, they’ve never been identified before. This is just the beginning of the “Great Storm” but it’s the “Nature of our Kind.”.
He does this because life is supposed to be fun. After all, “Girls just want to have fun.” We WILL get back to fun in this neighbourhood called Earth.
Guys, don’t get hung up on “girls” (it does not always mean female).
People listen up don't stand so close, I got somethin that you all should know. Holy matrimony is not for me, I'd rather die alone in misery.
Trickwines! Again! (#1914)
Trickwines are a set of new reagents aimed at improving tribal/SRM-style ships. The core concept is wines crafted out of mob drops and plants that provide a buff on drinking and a debuff on throwing, with bonus effects against fauna. To facilitate the transfer of booze to target, it also adds breakaway flasks: 50u glass bottles that shatter violently on contact, providing extra throw force, as well as a 25u gulp size which forces the user to choose between buff or debuff. I plan on adding a fair few more Trickwines and the SRM Barrel-Class Brewer Vessel (the SRM could really use more than 1 original ship) in later PRs to build on this concept. This HackMD will provide descriptions for the wines as well as a place of information for all Trickwine-related content: https://hackmd.io/eUIddN2dS3mpeU1CThFGng
Adds a fun new option for ships that lack proper chemistry and forces them to choose which effect they actually want.
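The buff-or-debuff tradeoff can be sketched in TypeScript (the real implementation is DM code; all names here are invented): a 50u flask with a 25u gulp size gives you two buff drinks, or one shattering throw that applies the debuff instead.

```typescript
interface Trickwine {
  buff: string;   // applied to the drinker
  debuff: string; // applied to the target when the flask shatters
}

class BreakawayFlask {
  private units = 50;         // 50u glass bottle
  private readonly gulp = 25; // forced 25u gulp size

  constructor(readonly wine: Trickwine) {}

  // Drinking consumes a gulp and grants the buff (undefined when empty).
  drink(): string | undefined {
    if (this.units < this.gulp) return undefined;
    this.units -= this.gulp;
    return this.wine.buff;
  }

  // Throwing shatters the flask violently, spending all contents on the debuff.
  throwAt(): string {
    this.units = 0;
    return this.wine.debuff;
  }
}
```

Because each gulp is half the flask, drinking for the buff spends the same container you would otherwise throw for the debuff, which is exactly the forced choice the PR describes.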
🆑 FalloutFalcon
add: Trickwines
add: Breakaway flasks!
add: Basic Trickwine brewing equipment to the SRM glaive
imageadd: Sprites for breakaway flasks along with trick wine icons for them!
code: Breakaway_flask_icon_state = null used for the same purpose as the glass and shot glass versions
/🆑
I DIDN'T KNOW THAT IF YOU RENAME A BRANCH IT CLOSES PRS RELATED TO IT?? I THOUGHT IT JUST KNEW!! 3rd time's a charm!
Signed-off-by: FalloutFalcon [email protected]
Signed-off-by: Mark Suckerberg [email protected]
Co-authored-by: Mark Suckerberg [email protected]
TileTest Part 1: Big Sweeping Changes! (#2054)
This is a multi-parter PR. Due to the fact that tiles here on shiptest are an unholy amalgam of decals, greyscale sprites and multiple spread-out files, things are bound to look weird. If they do, feel free to report it and it will be addressed in future PRs.
This PR finally accomplishes the promise I made to @triplezeta a year ago, creating a unique tileset for the server that people may enjoy!
To put every single microscopic change I have made would take forever, so I will instead provide a series of screenshots of it in action!
Utilizing a unique, modernized tileset for our server to differentiate from our competitors is something that has been requested, and I was more than happy to lend my hand to make it a reality!
🆑 PositiveEntropy
del: Removes several unused floor types, as well as completely annihilating the "monofloor" and "dirty" floor types, and the "edge" decal type.
imageadd: Redoes the floors using the TileTest tileset!
/🆑
Co-authored-by: Bjarl [email protected]
none of my previous commits are showing up on github??? Changes that have occurred: wrote and rewrote every function in script and scriptHelper probably half a dozen times. Got planetary info to display for the user, but changing that broke preventDefault, and now my npm tests are reversing expectations, so I have to rewrite validateInput and formSubmission again in all likelihood. Added book instructions for easy reference. Changed value names multiple times, then went through and (hopefully) finally standardized them. my brain is fried, i'm so mad my prev commits aren't showing up. this is spectrum's fault, i hate this new modem. Hi, sorry to anyone who's actually reading this, I've been at this for 12 hours straight.
Added Omen Spontaneous Combustion and light tube and mirror effects (#77175)
Cursed crewmembers can randomly, extremely rarely, spontaneously combust for no reason.
Cursed crewmembers can get zapped by nearby light tubes.
Cursed crewmembers can freak out when passing by mirrors.
To make up for these, triggering a cursed effect is slightly less than half as likely when walking around now.
Cursed is fun as hell, but after a certain point it gets kind of monotonous - it's airlocks, vending machines, and the rest is too rare to count. We need more ways to comically get hurt in the game.
You might dislike the 'reduced effects' bit, but trust me, it is incredibly frickin' common to have shit happen to you. Add the near-constant light tubes all over the station to the occasional vending machine and airlock crushes? Yeah, that needs toning down, or else it will be just a tad too miserable to be funny. It would also cause the poor janitor unneeded stress.
🆑
add: Cursed crewmembers can randomly, extremely rarely, spontaneously combust for no reason.
add: Cursed crewmembers can get zapped by nearby light tubes.
add: Cursed crewmembers can freak out when passing by mirrors.
add: To make up for these, triggering a cursed effect is slightly less than half as likely when walking around now.
/🆑
Co-authored-by: MrMelbert [email protected]
Co-authored-by: Ghom [email protected]
Co-authored-by: Time-Green [email protected]
added a fucking ugly modal, but will use it as a terminal. Also changed the custom search from a default function to an anonymous one for the sake of consistency. I don't really care about damn hoisting rn
Fix marines not getting first dibs if they ghost (#3802)
This PR fixes an issue where hugged marines that burst were not getting first dibs on the larva if they ghosted. Previously, the mind may not have been cleared out so that the ghost mob could be found, but it currently is.
NOTE: The existing check requiring the marine to be nested is still in place to get first dibs. I'm honestly not sure if this check should still exist. On one hand, I can agree it might be hard for the marine trying to get help to suddenly become the larva and switch gears; they are still going to be in the mindset of a marine that the larva should die. But it's also sort of weird to only get first dibs if nested. If xenos are unnesting hugged marines just before they pop, that's already a mechanic abuse that should be ahelped; but ideally there wouldn't be anything to be abused. Also, some may consider this kind of larva undesirable anyway, so maybe they'd prefer the marine to have it... So let me know if I should just remove the nested check on line 151.
Fixes an unintended consequence of ghosting when hugged that would prevent that marine from getting their first dibs on the larva.
🆑 Drathek fix: Fix ghosting preventing first dibs on the larva in a hugged marine /🆑
Implement new forking technique for vendored packages. (#51083)
Updates all module resolvers (node, webpack, nft for entrypoints, and nft for next-server) to consider whether vendored packages are suitable for a given resolve request and resolves them in an import semantics preserving way.
Prior to the proposed change, vendoring has been accomplished by aliasing module requests from one specifier to a different specifier. For instance, if we are using the built-in react packages for a build/runtime, we might replace `require('react')` with `require('next/dist/compiled/react')`.
However, this aliasing introduces a subtle bug. The React package has an export map that considers the condition `react-server`, and when you require/import `'react'` the conditions should be considered and the underlying implementation of React may differ from one environment to another. In particular, if you are resolving with the `react-server` condition you will be resolving the `react.shared-subset.js` implementation of React. This aliasing however breaks these semantics, because it turns a bare-specifier resolution of `react` with path `'.'` into a resolution with bare specifier `next` and path `'/dist/compiled/react'`. Module resolvers consider export maps of the package being imported from, but in the case of `next` there is no consideration for the condition `react-server`, and this resolution ends up pulling in the `index.js` implementation inside the React package by doing a simple path resolution to that package folder.
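As an illustration, a conditional export map looks roughly like the following (a simplified, hypothetical fragment, not React's actual package.json):

```json
{
  "name": "react",
  "exports": {
    ".": {
      "react-server": "./react.shared-subset.js",
      "default": "./index.js"
    }
  }
}
```

A resolver that honors conditions picks `react.shared-subset.js` under the `react-server` condition, while resolving `next/dist/compiled/react` as a path into the `next` package skips this map entirely.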
To work around this bug there is a prevalence of encoding the "right" resolution into the import itself. We, for instance, directly alias `react` to `next/dist/compiled/react/react.shared-subset.js` in certain cases. Other times we directly specify the runtime variant, for instance `react-server-dom-webpack/server.edge` rather than `react-server-dom-webpack/server`, bypassing the export map altogether by selecting the runtime-specific variant. However, some code is meant to run in more than one runtime, for instance anything that is part of the client bundle, which executes on the server during SSR and in the browser. There are workarounds like using `require` conditionally or `import(...)` dynamically, but these all have consequences for bundling and tree-shaking, and they still require careful consideration of the environment you are running in and which variant needs to load.
The result is that there is a large amount of manual pinning of aliases and additional complexity in the code and an inability to trust the package to specify the right resolution potentially causing conflicts in future versions as packages are updated.
It should be noted that aliasing is not in and of itself problematic when we are trying to implement a sort of lightweight forking based on build or runtime conditions. We have good examples of this, for instance with the `next/head` package, which within App Router should export a noop function. The problem is when we are trying to vendor an entire package and have the package behave semantically the same as if you had installed it yourself via `node_modules`.
The fix is seemingly straightforward. We need to stop aliasing these module specifiers and instead customize the resolution process to resolve from a location that will contain the desired vendored packages. We can then start simplifying our imports to use top-level package resources generally and let import conditions control the process of providing the right variant in the right context.
It should be said that vendoring is conditional. Currently we only vendor react packages for App Router runtimes. The implementation needs to be able to conditionally determine where a package resolves based on whether we're in an App Router context vs a Pages Router one.
Additionally the implementation needs to support alternate packages such as supporting the experimental channel for React when using features that require this version.
The first step is to put the vendored packages inside a `node_modules` folder. This is essential to the correct resolving of packages by most tools that implement module resolution. For packages that are meant to be vendored, meaning whole-package substitution, we move them from `next/(src|dist)/compiled/...` to `next/(src|dist)/vendored/node_modules`. The purpose of this move is to clarify that vendored packages operate with a different implementation. This initial PR moves the react dependencies for App Router and the `client-only` and `server-only` packages into this folder. In the future we can decide which other precompiled dependencies are best implemented as vendored packages and move them over.
It should be noted that because of our use of `JestWorker` we can get warnings for duplicate package names, so we modify the vendored packages for react, adding either `-vendored` or `-experimental-vendored` depending on which release channel the package came from. While this will require us to alter the request string for a module specifier, it will still be treating the react package as the bare specifier and thus use the export map as required.
The next thing we need to do is have all systems that do module resolution implement a custom module resolution step. There are five different resolvers that need to be considered:
- Updated the require-hook to resolve from the vendored directory without rewriting the request string to alter which package is identified in the bare specifier. For react packages we only do this vendoring if the `process.env.__NEXT_PRIVATE_PREBUNDLED_REACT` envvar is set, indicating the runtime is serving App Router builds. If we need a single node runtime to be able to conditionally resolve to both vendored and non-vendored versions, we will need to combine this with aliasing and encode whether the request is for the vendored version in the request string. Our current architecture does not require this though, so we will just rely on the envvar for now.
- Removed all aliases configured for react packages. Rely on the node runtime to properly alias external react dependencies. Added a resolver plugin `NextAppResolverPlugin` to preemptively perform resolution from the context of the vendored directory when encountering a vendored-eligible package.
- Updated the aliasing rules for react packages to resolve from the vendored directory when in an App Router context. This implementation is all essentially config because the capability of doing the resolve from any position (i.e. the vendored directory) already exists.
- Track chunks to trace for App Router separately from Pages Router. For the App Router chunk trace, use a custom resolve hook in nft which performs the resolution from the vendored directory when appropriate.
- The current implementation for next-server traces both node_modules and vendored versions of packages, so all versions are included. This is necessary because the next server can run in either context (App vs Pages Router) and may depend on any possible variant. We could in theory make two traces rather than a combined one, but this would require additional downstream changes, so for now it is the most conservative thing to do and is correct.
Once we have the correct resolution semantics for all resolvers we can start to remove instances targeting our precompiled instances, for example taking `import ... from "next/dist/compiled/react-server-dom-webpack/client"` and replacing it with `import ... from "react-server-dom-webpack/client"`. We can also stop requiring runtime-specific variants like `import ... from "react-server-dom-webpack/client.edge"`, replacing it with the generic export `"react-server-dom-webpack/client"`.
There are still two special-case aliases related to react:

- In profiling mode (browser only) we rewrite `react-dom` to `react-dom/profiling` and `scheduler/tracing` to `scheduler/tracing-profiling`. This can be moved to using export maps and conditions once react publishes updates that implement this on the package side.
- When resolving `react-dom` on the server we rewrite this to `react-dom/server-rendering-stub`. This is to avoid loading the entire react-dom client bundle on the server when most of it goes unused. In the next major, react will update this top-level export to only contain the parts that are usable in any runtime, and this alias can be dropped entirely.
There are two non-react packages currently being vendored that I have kept, but I think we ought to discuss the validity of vendoring them. The `client-only` and `server-only` packages are vendored so you can use them without having to remember to install them into your project. This is convenient but does perhaps become surprising if you don't realize what is happening. We should consider not doing this, but we can make that decision in another discussion/PR.
One of the things our webpack config implements for App Router is layers, which allow us to have separate instances of packages for the server components graph and the client (SSR) graph. The way we were managing layer selection was a bit arbitrary, so in addition to the other webpack changes, the way you cause a file to always end up in a specific layer is to end it with `.serverlayer`, `.clientlayer`, or `.sharedlayer`. These act as layer portals, so something in the server layer can import `foo.clientlayer` and that module will in fact be bundled in the client layer.
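The suffix-to-layer mapping can be sketched as follows (hypothetical layer names and function; the real webpack config is more involved):

```javascript
// Suffixes that pin a module to a specific layer, per the description above.
const LAYER_SUFFIXES = [
  ['.serverlayer', 'server'],
  ['.clientlayer', 'client'],
  ['.sharedlayer', 'shared'],
];

// Map a module's resource path to the layer it should be bundled into,
// or null to inherit the importing module's layer.
function layerForModule(resourcePath) {
  // Strip a JS/TS extension so 'foo.clientlayer.ts' matches '.clientlayer'.
  const base = resourcePath.replace(/\.(js|ts|jsx|tsx)$/, '');
  for (const [suffix, layer] of LAYER_SUFFIXES) {
    if (base.endsWith(suffix)) return layer;
  }
  return null;
}
```

In webpack terms this would back a `module.rules` entry whose `test` matches the suffix and whose `layer` option names the target layer, which is what makes importing `foo.clientlayer` from the server layer land that module in the client layer.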
Most package managers are fine with this resolution redirect; however, yarn berry (yarn 2+ with PnP) will not resolve packages that are not defined in a package.json as a dependency. This was not a problem with the prior strategy because it was never resolving these vendored packages; it was always resolving the next package and then just pointing to a file within it that happened to be from react or a related package.

To get around this issue, vendored packages are both committed in src and packed as a tgz file. Then in the next package.json we define these vendored packages as `optionalDependencies` pointing to these tarballs. For yarn PnP these packed versions will get used and resolved rather than the locally committed src files. For other package managers the optional dependencies may or may not get installed, but the resolution will still resolve to the checked-in src files. This isn't a particularly satisfying implementation, and if pnpm were to be updated to have consistent behavior installing from tarballs we could actually move the vendoring entirely to dependencies and simplify our resolvers a fair bit. But this will require an upstream change in pnpm and would take time to propagate in the community since many use older versions.
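The tarball-backed approach described above would look roughly like this in the manifest (a hypothetical fragment; the package names and paths are illustrative, not the actual Next.js package.json):

```json
{
  "name": "next",
  "optionalDependencies": {
    "react-vendored": "file:./vendored/react-vendored.tgz",
    "react-experimental-vendored": "file:./vendored/react-experimental-vendored.tgz"
  }
}
```

Yarn PnP installs from the tarballs and can then resolve the packages; managers that skip the optional dependencies still fall back to the checked-in src files.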
As part of this work I landed some other changes upstream that were necessary. One was to make our packing use `npm` to match our publishing step. This also allows us to pack `node_modules` folders, which is normally not supported but is possible if you define the folder in the package.json's `files` property.
See: #52563
Additionally nft did not provide a way to use the internal resolver if you were going to use the resolve hook so that is now exposed
See: vercel/nft#354
- When we prepare to make an isolated next install for integration tests we exclude node_modules by default, so we have a special case to allow `/vendored/node_modules`.
- The webpack module rules were refactored to be a little easier to reason about, and while they do work as is, it would be better for some of them to be wrapped in a `oneOf` rule; however, there is a bug in our css loader implementation that causes these `oneOf` rules to get deleted. We should fix this up in a followup to make the rules a little more robust.
- I removed `.sharedlayer` since this concept is leaky (not really related to the client/server boundary split) and it is getting refactored soon into a precompiled runtime anyway.
sched/core: Fix ttwu() race
Paul reported rcutorture occasionally hitting a NULL deref:
  sched_ttwu_pending()
    ttwu_do_wakeup()
      check_preempt_curr() := check_preempt_wakeup()
        find_matching_se()
          is_same_group()
            if (se->cfs_rq == pse->cfs_rq) <-- BOOM
Debugging showed that this only appears to happen when we take the new code-path from commit:
2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")
and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:
c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
this would've unconditionally hit:
smp_cond_load_acquire(&p->on_cpu, !VAL);
and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.
The NULL deref can be explained however if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1, it really is still running, just not here.
Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq, because p's cfs_rq is from the wrong CPU, therefore we'll iterate into the non-existent parents and NULL deref.
The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):
  X->cpu = 1
  rq(1)->curr = X

  CPU0                        CPU1                        CPU2

                              // switch away from X
                              LOCK rq(1)->lock
                              smp_mb__after_spinlock
                              dequeue_task(X)
                              X->on_rq = 0
                              switch_to(Z)
                              X->on_cpu = 0
                              UNLOCK rq(1)->lock

                                                          // migrate X to cpu 0
                                                          LOCK rq(1)->lock
                                                          dequeue_task(X)
                                                          set_task_cpu(X, 0)
                                                          X->cpu = 0
                                                          UNLOCK rq(1)->lock
                                                          LOCK rq(0)->lock
                                                          enqueue_task(X)
                                                          X->on_rq = 1
                                                          UNLOCK rq(0)->lock

  // switch to X
  LOCK rq(0)->lock
  smp_mb__after_spinlock
  switch_to(X)
  X->on_cpu = 1
  UNLOCK rq(0)->lock

  // X goes sleep
  X->state = TASK_UNINTERRUPTIBLE
  smp_mb();                   // wake X
                              ttwu()
                              LOCK X->pi_lock
                              smp_mb__after_spinlock
                              if (p->state)
                                cpu = X->cpu; // =? 1
                              smp_rmb()

  // X calls schedule()
  LOCK rq(0)->lock
  smp_mb__after_spinlock
  dequeue_task(X)
  X->on_rq = 0
                              if (p->on_rq)
                              smp_rmb();
                              if (p->on_cpu && ttwu_queue_wakelist(..)) [*]
                              smp_cond_load_acquire(&p->on_cpu, !VAL)
                              cpu = select_task_rq(X, X->wake_cpu, ...)
                              if (X->cpu != cpu)
  switch_to(Y)
  X->on_cpu = 0
  UNLOCK rq(0)->lock
However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.
(Most of the previous ttwu() races were found on very large PowerPC)
Nevertheless, this fully explains the observed failure case.
Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu") Reported-by: Paul E. McKenney [email protected] Tested-by: Paul E. McKenney [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Signed-off-by: Ingo Molnar [email protected] Link: https://lkml.kernel.org/r/[email protected] Change-Id: I40e0e01946eadb1701a4d06758e434591e5a5c92
Create README.md
Welcome to my website, a labor of love and creativity that I've meticulously crafted using HTML and CSS!
Building this website has been an incredible journey of self-expression and technical exploration. Every pixel, every line of code, reflects my dedication to creating a digital space that's uniquely mine. Let me take you behind the scenes and share how I brought this digital masterpiece to life.
🎨 Design Vision: The foundation of this website lies in its design. With a clear vision in mind, I sketched wireframes and mockups to visualize the layout, color scheme, and overall user experience. Careful consideration was given to every aspect of design, ensuring that the website is not only visually appealing but also intuitive and user-friendly.
🔍 HTML Structure: HTML is the backbone of any web page, and I meticulously hand-coded each element to create a structured and semantic layout. From the header that welcomes you to the footer that wraps up your journey, each section was carefully woven together to ensure a seamless flow and effortless navigation.
🎨 CSS Magic: CSS is where the magic happens! I delved into the world of stylesheets to breathe life into my design. With a mix of cascading styles, media queries, and responsive techniques, I've ensured that the website looks stunning across a variety of devices and screen sizes. From elegant typography to mesmerizing animations, every visual element has been meticulously styled to perfection.
💡 Interactive Elements: To elevate user engagement, I incorporated interactive elements using CSS animations and transitions. Buttons that subtly change color on hover, images that gracefully fade in as you scroll – these touches add a layer of interactivity that keeps visitors engaged and enhances their browsing experience.
🤝 Code Optimization: Building an efficient website is not just about aesthetics – it's about performance too. I paid special attention to optimizing the code for faster loading times and smoother interactions. Minification, compression, and other techniques were employed to ensure that the website delivers a seamless experience, even in areas with limited internet connectivity.
🌐 Responsive Mastery: In today's mobile-centric world, a responsive website is a must. With a combination of flexible grids and media queries, I've ensured that the website adapts gracefully to various devices, providing an equally delightful experience whether you're on a desktop, tablet, or smartphone.
🧰 Continuous Learning: This website is a testament to my ongoing journey of learning and growth. Throughout the process, I embraced challenges, sought solutions, and expanded my skillset. As technology evolves, I'm excited to continue enhancing and updating the website to incorporate the latest trends and innovations in web development.
So, welcome to my digital masterpiece, painstakingly brought to life through the art of HTML and CSS. As you explore the pages I've created, I hope you feel the passion, dedication, and creativity that went into every aspect. Thank you for visiting, and I invite you to immerse yourself in this captivating online experience!
Update README.md
This project showcases my proficiency in HTML, CSS, and Tailwind CSS, a utility-first CSS framework that has revolutionized the way I design and style web applications.
In this repository, you'll find a complete set of files that make up a sleek and responsive web page. The HTML files structure the content, while the CSS and Tailwind CSS stylesheets bring life to the design. I've harnessed the power of Tailwind's utility classes to craft a stunning visual experience without writing extensive custom CSS.
Python: Import OpenAPI documents into the semantic kernel (#2297)
This allows us to import OpenAPI documents, including ChatGPT plugins, into the Semantic Kernel.
- The interface reads the operationIds of the openapi spec into a skill:
from semantic_kernel.connectors.openapi import register_openapi_skill
skill = register_openapi_skill(kernel=kernel, skill_name="test", openapi_document="url/or/path/to/openapi.yamlorjson")
skill['operationId'].invoke_async()
- Parse an OpenAPI document
- For each operation in the document, create a function that will execute the operation
- Add all those operations to a skill in the kernel
- Modified `import_skill` to accept a dictionary of functions instead of just a class so that we can import dynamically created functions
- Created unit tests
TESTING: I've been testing this with the following ChatGPT plugins:
- Semantic Kernel Starter's Python Flask plugin
- ChatGPT's example retrieval plugin
- This one was annoying to set up. I didn't get the plugin functioning, but I was able to send the right API requests
- Also, their openapi file was invalid. The "servers" attribute is misindented
- Google ChatGPT plugin
- Chat TODO plugin
- This openapi file is also invalid. I checked with an online validator. I had to remove "required" from the referenced request objects' properties: https://github.com/lencx/chat-todo-plugin/blob/main/openapi.yaml#L85
Then I used this python file to test the examples:
import asyncio
import logging
import semantic_kernel as sk
from semantic_kernel import ContextVariables, Kernel
from semantic_kernel.connectors.ai.open_ai import AzureTextCompletion
from semantic_kernel.connectors.openapi.sk_openapi import register_openapi_skill
# Example usage
chatgpt_retrieval_plugin = {
"openapi": # location of the plugin's openapi.yaml file,
"payload": {
"queries": [
{
"query": "string",
"filter": {
"document_id": "string",
"source": "email",
"source_id": "string",
"author": "string",
"start_date": "string",
"end_date": "string",
},
"top_k": 3,
}
]
},
"operation_id": "query_query_post",
}
sk_python_flask = {
"openapi": # location of the plugin's openapi.yaml file,
"path_params": {"skill_name": "FunSkill", "function_name": "Joke"},
"payload": {"input": "dinosaurs"},
"operation_id": "executeFunction",
}
google_chatgpt_plugin = {
"openapi": # location of the plugin's openapi.yaml file,
"query_params": {"q": "dinosaurs"},
"operation_id": "searchGet",
}
todo_plugin_add = {
"openapi": # location of the plugin's openapi.yaml file,
"path_params": {"username": "markkarle"},
"payload": {"todo": "finish this"},
"operation_id": "addTodo",
}
todo_plugin_get = {
"openapi": # location of the plugin's openapi.yaml file,
"path_params": {"username": "markkarle"},
"operation_id": "getTodos",
}
todo_plugin_delete = {
"openapi": # location of the plugin's openapi.yaml file,
"path_params": {"username": "markkarle"},
"payload": {"todo_idx": 0},
"operation_id": "deleteTodo",
}
plugin = todo_plugin_get # set this to the plugin you want to try
logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
kernel = Kernel(log=logger)
deployment, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
kernel.add_text_completion_service(
"dv", AzureTextCompletion(deployment, endpoint, api_key)
)
skill = register_openapi_skill(
kernel=kernel, skill_name="test", openapi_document=plugin["openapi"]
)
context_variables = ContextVariables(variables=plugin)
result = asyncio.run(
skill[plugin["operation_id"]].invoke_async(variables=context_variables)
)
print(result)
- The code builds clean without any errors or warnings
- The PR follows the SK Contribution Guidelines and the pre-submission formatting script raises no violations
- All unit tests pass, and I have added new tests where possible
- I didn't break anyone 😄
Co-authored-by: Abby Harrison [email protected]
Mandatory late night crisis
So I had the silly idea to make a quick change to the folder layout in the content browser before I went to bed. Just shift a few folders around, what could go wrong right? They're just folders after all...
Turns out Unreal automatically deletes empty subfolders when you move a parent folder.
Fuck. I had to recreate a ton of subfolders using the screenshot I posted to #progress. Only took 20 minutes, but I'm lucky I had that for reference or I would have lost a day. Should be ok now, but the "Sound/Audio_*" folder structure is LOCKED until I can populate each one with foley and other assets.
More portable cades tweaks and buffs (#3967)
Despite making a lot of tweaks and changes to portable cades I barely touched them in the game until recently. Now I have more experience and can tweak it again.
- You can now stack damaged cades, and the stack stores the health of each cade. You can repair stacked cades, but it will take longer.
- The miniengi pamphlet allows faster repairs, but only when the cade is folded.
- You can quickly collapse portable cades with a crowbar if you have at least miniengi skills.
- You no longer need to have a folded portable cade in hand in order to repair it, but if you do, you can move while repairing.
- It's extremely annoying to repair each individual cade in order to stack them just because one got hit by a stray bullet once. Now you can just ignore damage and keep going.
- Yeah, it took 10 seconds for a PFC to repair each single cade. Really long. Now it's 5 seconds, but only when folded and if you have miniengi skills. Makes the miniengi pamphlet a bit more relevant.
- It was intended, but it turned out to be a bit inconvenient. It was faster to collapse by hand because you didn't need to deal with tools. Now it requires just a crowbar and is slightly faster. Also requires the miniengi skill to use the crowbar.
- The first was just a bit annoying; the second allows more mobility, which is the point of portable barricades.
🆑 ihatethisengine balance: you can stack scratched portable cades if their HP is at least 75%. Doing so will reduce the health of all barricades in the stack to the level of the most damaged. balance: you can repair stacked portable cades, but it will take longer depending on how many cades are in the stack. balance: the miniengi skill makes repairs of folded portable cades faster (10 seconds to 5 seconds, same as engineer). balance: with engineering skill of at least miniengi you can collapse a deployed portable barricade with a crowbar (a wrench is no longer required; slightly faster (2 sec to 1.5 sec)). balance: you no longer need to have a folded portable cade in hand in order to repair it. balance: if you have a folded portable cade in hand, you can move while repairing it. /🆑
Co-authored-by: ihatethisengine [email protected]
Python: implemented web search engine skill with bing connector (#1813)
In this PR, I have tried my hand at an implementation of web search engine skill in python semantic kernel using the Bing Web Search API.
In the semantic kernel directory, I have added a new directory called web_skills (To replicate Skills.Web from C#) and added the web search skill here. For now, I have implemented web search using the bing web search API. If this approach is fine, then I can implement the same with the google search API too. I have tried to stick with similar naming conventions as used in the C# implementation with matching context parameters and arguments.
I can also add some unit tests for the connectors and the search skill, and add something like exponential backoff to avoid rate limit errors while querying the search APIs.
Here is some sample code that checks the working of the search skill.
import os
import semantic_kernel as sk
from semantic_kernel.web_skills.web_search_engine_skill import WebSearchEngineSkill
from semantic_kernel.web_skills.connectors import BingConnector
from semantic_kernel.connectors.ai.open_ai import OpenAITextCompletion
async def main():
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_text_completion_service(
"dv", OpenAITextCompletion("text-davinci-003", api_key, org_id)
)
connector = BingConnector(api_key=os.getenv("BING_API_KEY"))
web_skill = kernel.import_skill(WebSearchEngineSkill(connector), "WebSearch")
prompt = "Who is Leonardo DiCaprio's current girlfriend?"
search_async = web_skill["searchAsync"]
result = await search_async.invoke_async(prompt)
print(result)
"""
Output:
["Celebrity Celebrity News Everything You Need to Know About Leonardo DiCaprio and Camila Morrone's Relationship From the beginning of their romance to today, we track their relationship here. By..."]
"""
prompt = """
Answer the question using only the data that is provided in the data section. Do not use any prior knowledge to answer the question.
Data: {{WebSearch.SearchAsync "What is semantic kernel?"}}
Question: What is semantic kernel?
Answer:
"""
qna = kernel.create_semantic_function(prompt, temperature=0.2)
context = kernel.create_new_context()
context["count"] = "10"
context["offset"] = "0"
result = await qna.invoke_async(context=context)
print(result)
"""
Output:
Semantic Kernel is an open-source SDK that lets you easily combine AI services like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C# and Python. By doing so, you can create AI apps that combine the best of both worlds. Semantic Kernel is at the center of the copilot stack.
"""
if __name__ == "__main__":
import asyncio
asyncio.run(main())
- The code builds clean without any errors or warnings
- The PR follows SK Contribution Guidelines (https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
- The code follows the .NET coding conventions (https://learn.microsoft.com/dotnet/csharp/fundamentals/coding-style/coding-conventions), verified with `dotnet format`
- All unit tests pass, and I have added new tests where possible
- I didn't break anyone 😄
Co-authored-by: Abby Harrison [email protected] Co-authored-by: Abby Harrison [email protected]
Updated README.md - Why is the Creator angry?
The CIA stormed Mount Calvary Baptist Church with the sole intent to kill every person already inside attending a church service. Their plan was to behead everyone, in front of the others who were forced to wait to die. They ensured none of their people would get too fatigued by supplying them with chainsaws (listen to the very beginning of “When the Doves Cry”.)
They also guaranteed no interruption from local law enforcement by diverting calls from cell phones to 911 to call takers using mobile phones pretending to be 911 and promising help was on the way. Other calls from people in the area received busy signals or call failed recordings. This was accomplished by the use of COWs (Cells on Wheels, which are mobile cell towers).
The Creator hears every prayer but does not intervene in human activities; this enables you to do whatever you want. When you cannot do something, human nature is to escalate your actions until you achieve your desired result. This is one of the reasons He never intervenes.
That doesn’t mean He doesn’t hear prayers or doesn’t create solutions before the problem exists. This massacre was engineered by the CIA to circumvent every single solution already in place. So the Creator had to listen to prayers from people waiting to die, and He gets infuriated When the Doves Cry.
It starts with the ringing of the bell..
The Creator does not ever want to see death. He understands it happens, accidents always will. But the CIA and others around the world are a leading cause of death in humans. They have to disappear now.
Remove William White Jr from 40-06-10
From EC: as per David Palmquist's Aug5 mail, please remove William White Jr from the orchestra in the session 10 June 1940 (DE4010), and add a note in the bottom, like: New Desor shows William White Jr. playing alto sax in this session, replacing Otto Hardwick. Indeed Hardwick missed the session, and Hodges and Webster did cover his parts. Subsequently William White came in for less than a week's time.
From DP: You might want to put a question mark beside William White, Jr. or add a footnote pointing out he may not have played this broadcast:
Down Beat 1940 07 01:
'Hardwicke [sic], Duke Ellington Split New York--Otto Hardwicke has been dropped from Duke Ellington's ork. The veteran altoist, with Duke for more than a decade, gave way to William White, Jr., of Washington, who also arranges. It was said Otto failed to show up for a special broadcast to England last week.'
Variety:
'DUKE ELLINGTON ORCH Short-Wave from N.Y. 30 Mins. Monday, 3 p.m. BBC, London 'America Dances' so this short-wave series has it; but at this hour in the afternoon it was a cinch England couldn't. It may even have been that program arrangers, aware of the hour drawback, so spaced the numbers as to show the individuality of the Ellington congregation, rather than its dansapation. Atmospherics were not kind to the first half of the airing, muffling the only vocal of the session and handing a beating to Ben Webster's tenor saxing on 'Cotton Tale' [sic]; piece only reached appreciable clarity at the finale. As though in recompense the bad airwaves cleared considerably for Coot [sic] Williams and "Concerto in C" and "Boy Meet Horn" for Rex Stewart, all new stuff for this side; London's 52nd street, Charing Cross, took the session like gospel. Anont this CBS shortwave series, one thing the listener asks here is 'Why doesn't the leader speak?' They're accustomed to a word or two from the batoneers around town, and seemingly take the absence of the personal touch as a loss. Ellington, however, did leave an indelible signature on all of the eight numbers aired. Selection of "The Sergeant Was Shy" was a neat gesture to the troops, over which wavelength the session aired this side.'
The show was recorded by CBS, with acetates sent to BBC for later rebroadcast, the first of which was aired 1941-04-26.
DEMS 99/5 (Carl Hällström): 'Duke on the air',
The Melody Maker, May 3, 1941, which deals with the first re-broadcast from London (April 26, 1941) of the June 10 1940 broadcast. The broadcast was recorded onto safety acetates by CBS in New York and a set of 78 rpm acetates was later shipped to the BBC which used it as source for all their subsequent rebroadcasts which naturally were done from their London studios. (The same set of acetates is the "daddy" to all the commercial releases of the broadcast)'
New Desor and Timner show William White Jr. in this session. This seems unlikely if Hardwick's absence was unexpected unless White was nearby and readily at hand.
DEMS 2000/3 Loren Schoenberg:
'In DESOR, Billy White replaces Otto Hardwicke on the June 1940 BBC broadcast (recorded on the 10th, broadcast on the 12*). According to a contemporary Feather article in Melody Maker, Hardwicke [sic] missed the session, and Hodges and Webster did their best to cover his parts. It was after this that Billy White came in for less than a week's time. Any other corroboration?'
Sjef Hoefsmit:
'In Klaus Stratemann (page 162) we read:
'During the Apollo engagement (7-14Jun40) and the BBC broadcast session (10Jun40), Otto Hardwick was missing from the Ellington band, replaced by one William White, Jr. It appears that Hardwick simply failed to show up for the "America Dances" program (Down Beat 1Jul40, page 3), but was back with the band when they played a subsequent Palisades Park one-nighter (on 28Jun40), according to Down Beat (15Jul40, page 12), his replacement having been found inadequate.'
The liner-notes for this session on the JUCD 2043 CD "The Radio Years" (see DEMS 00/2-13/3) also have no William White in the band. He is explicitly mentioned though in the liner-notes by Alun Morgan of both CDs titled "The British Connexion", the Leeds '97 souvenir CD and the JUCD 2069 CD. (See DEMS 97/2-14 and 99/5-4/1).'
Motherfucker. I'm so sick of bugs.
Cigarettes no longer (seem to) cause kidney damage to people with unclean living.
psion void armor has correct slowdown for shoes and doesn't use slowdown on other pieces of armor. Additionally, no longer allows ears to flop outside of it. It's a fucking space suit, why would they be out?
Opifex medbelt no longer selectable, sorry powergamers.
Removes change_appearance from the baseline armor vest. Why? It is the parent to MANY MANY MANY fucking items and thus caused MANY MANY MANY items to have erroneous change_appearance procs that only had two options for the base parent item. This is why we don't put fucking procs on BASE PARENT items that affect DOZENS of other items. Fixes a few others: WO plate has no unique sprite and now has a proper working change appearance. CO does have a unique sprite, so it is gone.
Fixes #4732 Fixes #4734 fixes #4724
hardsuit/firesuit cleanup (#18308)
- real
- holy fuck, you guys are annoying
- fix cargo arbitrage; idk why tf it changed from editing armor values, but fuck my life I guess
- why god
- Update suits.yml
- Update cargo_emergency.yml
Fixes things about goliaths: wallhacks/range hacks (no, really) and tentacles not spawning in mineral turfs; also fixes find_potential_targets
wallhacks (#77393)
Goliath's sand digging behaviour could potentially target a turf that's actually unreachable by the goliath, e.g.

G#
#T

where G = goliath, # = wall, T = target turf. Fixed that, but I think there could be something easier here; maybe instead grab turfs in the goliath's view()? Unsure.
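The unreachable-diagonal case above can be sketched outside of DM (Python here; the grid encoding, function name, and reachability rule are illustrative, not the game's actual movement code):

```python
# Illustrative sketch: a single diagonal step is only reachable if at least
# one of the two orthogonal tiles flanking the move is passable ("."). In the
# G#/#T layout both flanking tiles are walls, so T should never be targeted.
def diagonal_reachable(grid, src, dst):
    (r1, c1), (r2, c2) = src, dst
    if abs(r1 - r2) != 1 or abs(c1 - c2) != 1:
        return False  # only handles single diagonal steps
    flank_a = grid[r1][c2]  # tile beside the source, in the destination column
    flank_b = grid[r2][c1]  # tile beside the destination, in the source column
    return flank_a == "." or flank_b == "."

layout = ["G#",
          "#T"]
assert diagonal_reachable(layout, (0, 0), (1, 1)) is False  # both flanks walled
assert diagonal_reachable(["G.", "#T"], (0, 0), (1, 1)) is True
```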
The component goliaths use to telegraph their attacks (basic_mob_attack_telegraph) casts a do_after() to perform the attack, but it was not actually checking that the target stays in melee range, as it was using the source goliath as both user and target, so it didn't actually care at all about the target. Implemented an extra_checks to Adjacent(), since that's the closest we get for melee range shenanigans, I suppose. This still allows the source basicmob to attack the target if the target moves around the source basicmob.
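The missing-range-check bug can be illustrated with a small Python analog of a windup with per-tick checks (the names do_after and Adjacent mirror the DM concepts; this is not the game's code):

```python
# Illustrative analog of a do_after() windup that aborts early if any of the
# caller-supplied extra_checks fails on a tick. Without an Adjacent()-style
# check in the list, the windup always completes, even if the target fled.
def do_after(duration_ticks, extra_checks):
    """Run a windup; abort early if any per-tick check fails."""
    for _ in range(duration_ticks):
        if not all(check() for check in extra_checks):
            return False  # windup interrupted, e.g. target left melee range
    return True

assert do_after(3, []) is True                   # the original bug: no checks
adjacent = lambda: False                          # target has moved away
assert do_after(3, [adjacent]) is False           # the fix: windup aborts
```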
Goliaths were also able to summon tentacles on a target that moved into cover but still stayed in the find_potential_targets target range, which meant more wallhacks. This was a thing for the base find_potential_targets, meaning that every basic mob using it was a dirty haxxor (or very vengeful). Fixed that by making find_potential_targets also check can_see() before proceeding further down find_potential_targets/perform(). The only exception to this check currently is bileworms.
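The can_see() gate can be sketched as a simplified line-of-sight test (Python; the grid and the same-row/same-column rule are purely illustrative, not BYOND's actual can_see()):

```python
# Simplified line-of-sight check: a target is visible only if no wall ("#")
# sits between source and destination. Real can_see() is far more general;
# this only handles straight same-row/same-column lines for illustration.
def can_see(grid, src, dst):
    (r1, c1), (r2, c2) = src, dst
    if r1 == r2:
        step = 1 if c2 > c1 else -1
        return all(grid[r1][c] != "#" for c in range(c1 + step, c2, step))
    if c1 == c2:
        step = 1 if r2 > r1 else -1
        return all(grid[r][c1] != "#" for r in range(r1 + step, r2, step))
    return False  # non-straight lines not modeled in this sketch

grid = ["G#T",
        "..."]
assert can_see(grid, (0, 0), (0, 2)) is False  # target behind cover: no summon
assert can_see(grid, (1, 0), (1, 2)) is True   # clear line: target valid
```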
Goliath tentacles were not spawning in mineral turfs, as their Initialize() checked for closed turfs before handling mineral turf mining. Fixed that as well.
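The ordering bug is just an early-return landing before the branch that should have run first; a minimal Python sketch (turf flags and function names are invented for illustration):

```python
# Mineral turfs count as "closed", so bailing out on closed turfs first made
# the mineral-mining branch unreachable. Reordering the checks fixes it.
def spawn_tentacle_buggy(turf):
    if turf["closed"]:
        return False          # bails out before mineral handling ever runs
    return True

def spawn_tentacle_fixed(turf):
    if turf["mineral"]:
        return True           # handle mineral mining first, then spawn
    if turf["closed"]:
        return False
    return True

mineral_wall = {"closed": True, "mineral": True}
assert spawn_tentacle_buggy(mineral_wall) is False  # the reported bug
assert spawn_tentacle_fixed(mineral_wall) is True
```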
🆑 fix: fixed goliaths digging sand that they can't actually reach (behind windows or in between closed turfs) fix: fixed goliaths melee attacking their target despite the target running away from goliath melee range fix: fixed goliath tentacles not spawning in mineral turfs fix: fixed goliaths summoning tentacles on targets that moved behind cover but stayed in their targeting range. this applies to most basic mobs, really, so if any basic mob was targeting you despite you hauling ass behind cover, they shouldn't anymore /🆑
[DOCS] Code Refactoring and content update of quickstart_workflow.ipynb (#3472)
I found the quickstart workflow not as quick as it could be:
- Cells cannot be run straightaway in google colab and need extra work (eg. libraries not imported).
- Some important concepts (eg. records and datasets) are not clearly stated in text and code snippets
- Text refers to the same steps more than once (no clear chain of thought)
- Cells override the same variable (eg. record), so the feeling is more of a cheatsheet than of a tutorial notebook
- Content is not updated (eg. ArgillaTrainer is not ever mentioned in the Train section)
I worked on a new version of the notebook with enhanced code and text cells. I also added code snippets for training examples which were previously only described textually.
One last suggestion: it's advisable that external files (data) are downloaded programmatically by running a cell (eg. using the requests library). Snapchat_app_store_reviews.csv and kaffee_reviews.csv are taken from Kaggle, which requires sign-in, so it's not possible to download them directly. Possible solutions:
- place a copy of the Kaggle datasets in Argilla's GitHub repository (if permitted by Kaggle's terms of use)
- select other datasets from another source.
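A download cell along the lines suggested above might look like this (the URL is a placeholder: it assumes the CSV has been mirrored somewhere publicly reachable, such as a raw GitHub URL, since Kaggle itself requires sign-in):

```python
# Sketch of a programmatic dataset download for a notebook cell,
# using the requests library as suggested.
import requests

def download_file(url: str, dest: str) -> None:
    """Fetch a file over HTTP and write it to dest, failing loudly on errors."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # raise on 4xx/5xx instead of saving an error page
    with open(dest, "wb") as f:
        f.write(response.content)

# Placeholder usage; <org>/<repo> would be wherever the CSV is mirrored:
# download_file(
#     "https://raw.githubusercontent.com/<org>/<repo>/main/Snapchat_app_store_reviews.csv",
#     "Snapchat_app_store_reviews.csv",
# )
```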
Closes #3431
Type of change
(Please delete options that are not relevant. Remember to title the PR according to the type of change)
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- Refactor (change restructuring the codebase without changing functionality)
- Improvement (change adding some improvement to an existing functionality)
- Documentation update
How Has This Been Tested
(Please describe the tests that you ran to verify your changes. And ideally, reference tests.)
- Test A: code run with latest google-colab (v.1.0.0)
Checklist
- I added relevant documentation
- My code follows the style guidelines of this project
- I did a self-review of my code
- I made corresponding changes to the documentation
- My changes generate no new warnings
- I have added tests that prove my fix is effective or that my feature works
- I filled out the contributor form (see text above)
- I have added relevant notes to the CHANGELOG.md file (See https://keepachangelog.com/)
Co-authored-by: devorma [email protected] Co-authored-by: davidberenstein1957 [email protected]
aider: Remove the definition of the default constructor from Date.h.
USER: what is this repo
ASSISTANT: Based on the file summaries you've provided, this repository seems to be a C++ project related to meteorological data processing. It contains classes for handling dates (Date.cpp, Date.h), times (Time.cpp, Time.h), and wind logs (WindLog.cpp, WindLog.h). The wind logs appear to contain data on air temperature, solar radiation, wind speed, and the date and time of the log.
The data for this project is likely stored in the CSV files in the Data directory. These files seem to contain meteorological data for different years.
The main.cpp file is likely the entry point of the application, but without its summary or content, it's hard to say what exactly the application does.
Please note that this is a high-level overview based on the provided information. For a more detailed understanding, I would need to see the content of the files.
USER: I added these read-write files: main.cpp
ASSISTANT: Great! Now that the main.cpp file is available for editing, could you please provide the content of the file or let me know what changes you would like to make to it?
USER: is this repo complete?
ASSISTANT: Without seeing the content of the files, especially the main.cpp file, it's hard to say if the repository is complete.
A complete repository typically includes:
- Source code files: These are the .cpp and .h files in your repository. They seem to be present.
- Data files: Your repository seems to have a lot of .csv files in the Data directory, which likely serve as input data for the program.
- A main.cpp file: This is the entry point of the application. You've mentioned that it's present, but without seeing its content, it's hard to say if it's complete.
- Build files: These are files like Makefile or CMakeLists.txt that describe how to build the application. I don't see any mention of these, but they might not be necessary if the project is simple or if the build process is documented elsewhere.
- Documentation: This includes README files, comments in the code, and any other documents that explain what the project does and how to use it. I don't see any mention of these.
- Test files: These are files that contain code to test the functionality of the application. I see a DateTest.cpp file, which likely contains tests for the Date class.
So, based on the information provided, it seems like most of the necessary components are present, but without more information, it's hard to say for sure if the repository is complete.
USER: I added these read-write files: DateTest.cpp
ASSISTANT: Great! Now that the DateTest.cpp file is available for editing, could you please provide the content of the file or let me know what changes you would like to make to it?
USER: add Date.cpp
ASSISTANT: Sure, now that Date.cpp is available for editing, could you please provide the content of the file or let me know what changes you would like to make to it?
USER: in Date.cpp error: redefinition of 'Date::Date()'
ASSISTANT: The error message "redefinition of 'Date::Date()'" suggests that the default constructor for the Date class is defined more than once in your code. This could happen if the constructor is defined in both the header file (Date.h) and the source file (Date.cpp), or if it's defined multiple times within the same file.
Since we only have the Date.cpp file, let's check it for duplicate definitions of the default constructor. Here is the relevant part of the Date.cpp file:
// default constructor
Date::Date()
: day(1), month(1), year(2000)
{
}
This is the only definition of the default constructor in this file. If the constructor is also defined in the Date.h file, you should remove it from there. However, since the Date.h file is read-only, you'll need to do this yourself.
If the constructor is not defined in the Date.h file, then there must be another definition of the default constructor in the Date.cpp file. In that case, you should remove the duplicate definition.
If you're still having trouble, please provide the content of the Date.h file or any other files that might contain a definition of the Date::Date() constructor.
USER: I added these read-write files: Date.h
ASSISTANT: Great! Now that the Date.h file is available for editing, we can address the issue of the redefinition of the Date::Date() constructor.
Looking at the Date.h file, we can see that the default constructor is indeed defined in the header file:
// default constructor
Date() : day(1), month(1), year(2000) {}
And it is also defined in the Date.cpp file:
// default constructor
Date::Date()
: day(1), month(1), year(2000)
{
}
This is causing the redefinition error. To fix this, we should remove the definition of the default constructor from the Date.h file and keep it only in the Date.cpp file. Here is the proposed change:
Date.h
<<<<<<< ORIGINAL
// default constructor
Date() : day(1), month(1), year(2000) {}
=======
// default constructor
Date();
>>>>>>> UPDATED
This change declares the default constructor in the Date.h file but does not define it. The definition remains in the Date.cpp file. This should resolve the redefinition error.
[MIRROR] Fixes bloody soles making jumpsuits that cover your feet bloody when you're wearing shoes [MDB IGNORE] (#22787)
- Fixes bloody soles making jumpsuits that cover your feet bloody when you're wearing shoes (#77077)
Title says it all.
It basically made it so wearing something like a kilt would result in the kilt getting all bloody as soon as you walked over blood, even when you were wearing shoes, unless you wore something else that obscured shoes.
I debated with myself a lot over the implementation for this, I was thinking of adding some way to obscure feet in particular, but it's honestly so niche that it could only have caused more issues elsewhere if I tried to fix this issue that way.
- Fixes bloody soles making jumpsuits that cover your feet bloody when you're wearing shoes
Co-authored-by: GoldenAlpharex [email protected]
feat: support for external executor plugins (#2305)
Hi @johanneskoester! 👋 As we chatted about in a thread somewhere, I think it would be really powerful to allow for installing (and discovering) external plugins to Snakemake. Specifically for the Flux Operator, I have easily three designs I'm testing, and it's not really appropriate to add them proper to Snakemake - but I believe the developer user should be empowered to flexibly add/remove and test them out.
This pull request is a first-try demo of how Snakemake could allow external executor plugins. I say "first try" because it's the first time I've experimented with plugins, and I tried to choose a design that optimizes simplicity and flexibility without requiring external packages, or specific features of setuptools or similar (that are likely to change). The basic design here uses pkgutil to discover snakemake_executor_* plugins, and then provides them to the client (to add arguments) and to the scheduler to select one with --executor.
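The pkgutil-based discovery described above might be sketched like this (illustrative only; the module-name prefix follows the PR's snakemake_executor_* convention, and the helper name is mine, not the PR's API):

```python
# Sketch of prefix-based plugin discovery: scan installed top-level modules
# with pkgutil and keep anything named snakemake_executor_*.
import pkgutil

PLUGIN_PREFIX = "snakemake_executor_"

def discover_executor_plugins():
    """Map plugin short name (e.g. 'flux') -> importable module name."""
    return {
        name[len(PLUGIN_PREFIX):]: name
        for _, name, _ in pkgutil.iter_modules()
        if name.startswith(PLUGIN_PREFIX)
    }

# e.g. with snakemake-executor-flux installed, this would contain
# {"flux": "snakemake_executor_flux"}, letting --executor flux resolve
# to a module to import and validate only when chosen.
```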
I've written up an entire tutorial and the basic design in this early prototype, which is basically the current Flux integration as a plugin! https://github.com/snakemake/snakemake-executor-flux. The user would basically do:
# Assuming this was released on pypi (it's not yet)
$ pip install snakemake-executor-flux
# Run the workflow using the flux custom executor
$ snakemake --jobs 1 --executor flux
I've designed it so that plugins are validated only when chosen, and each plugin can add to or otherwise customize the parser, and then (after parsing) further tweak the args if chosen. Then in scheduler.py, we simply check whether the user selected a plugin, and call the main executor (and local_executor) classes if this is the case.
The one hard piece is having a flexible way to pass forward all those custom arguments. The current Snakemake design has basically a custom boolean for every executor hard-coded (e.g., --flux or --slurm), and while we don't want to blow that up, I'm worried that moving forward, passing all these custom namespaced arguments through the init, workflow, and scheduler/dag is going to get very messy. So the approach here is a suggested way to handle the expanding space of additional executors by way of passing forward full args, and then allowing the plugins to customize the parser before or after. If we were to, for example, turn current executors into plugins (something I expect we might want to do for the Google Life Sciences API, which is going to be deprecated in favor of Batch), we could write out a more hardened spec - some configuration class that is passed from the argument parser through the executor and through execution (instead of all the one-off arguments).
Anyway - this is just a first shot and I'm hoping to start some discussion! This is a totally separate thing from TBA work with Google Batch - this is something that I've wanted to try for a while as I've wanted to add more executors and have seen the executor space exploding. :laughing: I haven't written tests or updated any docs yet pending our discussion!
- The PR contains a test case for the changes or the changes are already covered by an existing test case.
- The documentation (docs/) is updated to reflect the changes, or this is not necessary (e.g. if the change neither modifies the language nor the behavior or functionalities of Snakemake).
Signed-off-by: vsoch [email protected] Co-authored-by: vsoch [email protected] Co-authored-by: Johannes Köster [email protected] Co-authored-by: Johannes Köster [email protected]
[MIRROR] Refactors Slaughter/Laughter Demons into Basic Mobs [MDB IGNORE] (#22801)
- Refactors Slaughter/Laughter Demons into Basic Mobs (#77206)
On the tin, the former "imp" is now refactored into basic mob code. Very simple since these are only meant to be controlled by players, and all of their stuff was on Signal Handlers and Cooldown Actions anyways. Just lessens the amount of stupidity.
Did you know that we were trying to make demons spawn a pop'd cat named "Laughter"? Embedded in the list? I've literally never seen this cat, so I'm under heavy suspicion that the code we were using was broken for the longest time (or may have never worked), and we now instead just do it in a much more sane way, having a cat spawn on our demise.

Cleaner code! Less simple-mob jank to deal with. Trims down the list of simple animals to refactor. No more duplicated code that we were already doing on parent! It's so good, man; literally everything was seamless with a bit of retooling and tinkering. The typepath is also no longer imp, it's actually demon, which I'm happy with because there are no other demons for it to be confused with anymore.
We were also doing copypasta on both the demon spawner bottle and the demon spawning event so I also just unified that into the mob. I also reorganized the sprites to be a bit clearer and match their new nomenclature
🆑 refactor: Slaughter and Laughter Demons have been refactored, please place an issue report for any unexpected things/hitches. fix: Laughter Demons should now actually drop a kitten. /🆑
- Refactors Slaughter/Laughter Demons into Basic Mobs
Co-authored-by: san7890 [email protected]
Vile's Mods Update: Aug 7, 2023
OPEN THE WINDOWS
- Window walls no longer cover the floor, so you can build floors under them now

AMUSE BOUCHE
- Increased glow radii of hearths
!!!MAKE SURE TO CHECK YOUR WORK TAB TO UPDATE THE FOLLOWING!!!
- Workgiver added: Work at hearth. This is so you can specify non-food jobs at a hearth, such as candlemaking.
- Kitchen table is now removed from "butchering", so you can differentiate jobs between butchering and other kitchen table bills if you want.

HARDCORE RENOVATION
- Increased cost of service counter and bar
- New barricade texture

HELL BENT FOR LEATHER TANNING
- Renamed some leathers and added grades to the end of the names. For example, D- is a poor leather and B is a decent leather. This is experimental and subject to change after some testing and your feedback.
- Dyeing vat rotates correctly
- Fixed cured/uncured tags nesting
Leather thingDef changes:
- Upholstery leather (brown) is now called Upholstery leather, oxblood
- Added Embossed leather, oxblood
- Embossed leather (black) is now called simply Patent leather (still has the same stats as other embossed leathers)
- Top-grain leather is now called "unpolished leather"
- Shortened some recipe names to make them easier to read
- Genuine and bonded leather use fewer ingredients / are more efficient
- Full-grain leather recipes are now half as efficient
- Big birds drop fewer hides
- Bark is now removed at chopping spot/block
- Added thingCategories for leathers so you can more easily customize stockpiles and recipes
- Some adjustments to leather colors; fixed soft leather color bug
- Added custom textures for leather defs

MATERIALS SCIENCE
- Chlorine/lye now more efficient to make
- Teflon uses crushed stone instead of flux for now (considering adding fluorspar, but not ready to go there just yet)
- Coin minting/melting unlocked early game
- Galena can be smelted early game
- Magnetite smelting now more efficient
- Can now smelt titanomagnetite at the blast furnace, which gets you lots of iron, but you don't get the titanium
- Fixed aluminium door-open speed
- Adjusted ore commonality:
  - Sphalerite much more common on surface and deep drilling
  - Silver no longer found deep-minable (get it from galena or melt coins)
  - Titanomagnetite can be found deep mining, but very rare
  - Deep cassiterite less common
  - Deep glow/coldstone less common
  - Galena more common
  - Scheelite more common
- Adjusted some fabric colors
- Vinyl faux leather severely nerfed
- Granite and marble have a beauty offset for walls etc.

VILE'S TWEAKS
- Added some offset perks to some ideology clothing
- Removed pointless gendering of some apparel
- Rockhound background gives a stonecutting bonus
- Shoes have better movespeed, caligae boots worse
- More fucking around with pawn badges

WOOD YOU PLEASE
- Increased efficiency of higher-tier candles and lanterns
- Renamed firewoods to be easier to read in small menus
- Door open speed is now related to how dense the wood is. REMINDER: Plank recipes are listed from lightest (cecropia/pine) to heaviest (as long as you have alphabetical sorting turned off)
- Added a note to the sawmill description about smithing workspeed
- Patched heating log stove to be less powerful but much more fuel efficient

WORK TAB FOR BIG FONTS
- Prep meal is now called "kitchen table"
headache
Might have made my REPL too stupid, but I can't for the life of me mint with collection-policy. Collections work fine, but here we are. I need to figure out why the ledger isn't finding tokens but token-policy sees them.
Speed, Money and Faith: Updating areas of Town. (#2286)
- Update TGS DMAPI
- Speed, Money and Faith: Updating areas of Town.
  - Added a Church with a graveyard area (that is currently empty because we have no tombs).
  - Remade one quarter into 4 different shops: Liquor, Pharmacy, Gun Shop, General Store.
  - Remade the old shitty Library into a Biker's Club.
  - Remade Dime's Radio Station (with his permission).
  - Fixed a small area issue on the top z-level of the Car Jankyard.
- Fixes up a bunch of stuff :)
- Additional minor fixes
Co-authored-by: tgstation-server [email protected] Co-authored-by: Edward Nashton [email protected] Co-authored-by: Professor Popoff [email protected]
Added software list for cracked Macintosh floppy images. (#11454)
Alter Ego (male version 1.0) (san inc crack) [4am, san inc, A-Noid] Alter Ego (version 1.1 female) (san inc crack) [4am, san inc, A-Noid] Alternate Reality: The City (version 3.0) (san inc crack) [4am, san inc, A-Noid] Animation Toolkit I: The Players (version 1.0) (4am crack) [4am, A-Noid] Balance of Power (version 1.03) (san inc crack) [4am, san inc, A-Noid] Borrowed Time (san inc crack) [4am, san inc, A-Noid] Championship Star League Baseball (san inc crack) [4am, san inc, A-Noid] Cutthroats (release 23 / 840809-C) (4am crack) [4am, A-Noid] CX Base 500 (French, version 1.1) (san inc crack) [4am, san inc, A-Noid] Deadline (release 27 / 831005-C) (4am crack) [4am, A-Noid] Defender of the Crown (san inc crack) [4am, san inc, A-Noid] Deluxe Music Construction Set (version 1.0) (san inc crack) [4am, san inc, A-Noid] Déjà Vu (version 2.3) (4am crack) [4am, A-Noid] Déjà Vu: A Nightmare Comes True!! (san inc crack) [4am, san inc, A-Noid] Déjà Vu II: Lost in Las Vegas!! (san inc crack) [4am, san inc, A-Noid] Dollars and Sense (version 1.3) (4am crack) [4am, A-Noid] Downhill Racer (san inc crack) [4am, san inc, A-Noid] Dragonworld (4am crack) [4am, A-Noid] ExperLisp (version 1.0) (4am crack) [4am, A-Noid] Forbidden Castle (san inc crack) [4am, san inc, A-Noid] Fusillade (version 1.0) (san inc crack) [4am, san inc, A-Noid] Geometry (version 1.1) (4am crack) [4am, A-Noid] Habadex (version 1.1) (4am crack) [4am, A-Noid] Hacker II (san inc crack) [4am, san inc, A-Noid] Harrier Strike Mission (san inc crack) [4am, san inc, A-Noid] Indiana Jones and the Revenge of the Ancients (san inc crack) [4am, san inc, A-Noid] Infidel (release 22 / 840522-C) (4am crack) [4am, A-Noid] Jam Session (version 1.0) (4am crack) [4am, A-Noid] Legends of the Lost Realm I: The Gathering of Heroes (version 2.0) (4am crack) [4am, A-Noid] Lode Runner (version 1.0) (4am crack) [4am, A-Noid] Mac Pro Football (version 1.0) (san inc crack) [4am, san inc, A-Noid] MacBackup (version 2.6) (4am crack) [4am, 
A-Noid] MacCheckers and Reversi (4am crack) [4am, A-Noid] MacCopy (version 1.1) (4am crack) [4am, A-Noid] MacGammon! (version 1.0) (4am crack) [4am, A-Noid] MacGolf (version 2.0) (4am crack) [4am, A-Noid] MacWars (san inc crack) [4am, san inc, A-Noid] Master Tracks Pro (version 1.10) (san inc crack) [4am, san inc, A-Noid] Master Tracks Pro (version 2.00h) (san inc crack) [4am, san inc, A-Noid] Master Tracks Pro (version 3.4a) (san inc crack) [4am, san inc, A-Noid] Master Tracks Pro (version 4.0) (san inc crack) [4am, san inc, A-Noid] Math Blaster (version 1.0) (4am crack) [4am, A-Noid] Maze Survival (san inc crack) [4am, san inc, A-Noid] Microsoft Excel (version 1.00) (san inc crack) [4am, san inc, A-Noid] Microsoft File (version 1.04) (san inc crack) [4am, san inc, A-Noid] Mindshadow (san inc crack) [4am, san inc, A-Noid] Moriarty's Revenge (version 1.0) (san inc crack) [4am, san inc, A-Noid] Moriarty's Revenge (version 1.03) (4am crack) [4am, A-Noid] Mouse Stampede (version 1.00) (4am crack) [4am, A-Noid] Murder by the Dozen (Thunder Mountain) (4am crack) [4am, A-Noid] My Office (version 2.7) (4am crack) [4am, A-Noid] One on One (san inc crack) [4am, san inc, A-Noid] Orb Quest: Part I: The Search for Seven Wards (version 1.04) (san inc crack) [4am, san inc, A-Noid] Patton Strikes Back (version 1.00) (san inc crack) [4am, san inc, A-Noid] Patton vs. 
Rommel (version 1.05) (san inc crack) [4am, san inc, A-Noid] Pensate (version 1.1) (4am crack) [4am, A-Noid] PFS File and Report (version A.00) (4am crack) [4am, A-Noid] Physics (version 1.0) (4am crack) [4am, A-Noid] Physics (version 1.2) (4am crack) [4am, A-Noid] Pinball Construction Set (version 2.5) (san inc crack) [4am, san inc, A-Noid] Pipe Dream (version 1.2) (4am crack) [4am, A-Noid] Professional Composer (version 2.3Mfx) (san inc crack) [4am, san inc, A-Noid] Q-Sheet (version 1.0) (san inc crack) [4am, san inc, A-Noid] Rambo: First Blood Part II (san inc crack) [4am, san inc, A-Noid] Reader Rabbit (version 2.0) (4am crack) [4am, A-Noid] Rogue (version 1.0) (san inc crack) [4am, san inc, A-Noid] Seastalker (release 15 / 840522-C) (4am crack) [4am, A-Noid] Seven Cities of Gold (san inc crack) [4am, san inc, A-Noid] Shadowgate (san inc crack) [4am, san inc, A-Noid] Shanghai (version 1.0) (san inc crack) [4am, san inc, A-Noid] Shufflepuck Cafe (version 1.0) (4am crack) [4am, A-Noid] Sierra Championship Boxing (4am crack) [4am, A-Noid] SimCity (version 1.1) (4am crack) [4am, A-Noid] SimCity (version 1.2, black & white) (4am crack) [4am, A-Noid] SimEarth (version 1.0) (4am crack) [4am, A-Noid] Skyfox (san inc crack) [4am, san inc, A-Noid] Smash Hit Racquetball (version 1.01) (san inc crack) [4am, san inc, A-Noid] SmoothTalker (version 1.0) (4am crack) [4am, A-Noid] Speed Reader II (version 1.1) (4am crack) [4am, A-Noid] Speller Bee (version 1.1) (4am crack) [4am, A-Noid] Star Trek: The Kobayashi Alternative (version 1.0) (san inc crack) [4am, san inc, A-Noid] Stratego (version 1.0) (4am crack) [4am, A-Noid] Suspect (release 14 / 841005-C) (4am crack) [4am, A-Noid] Tass Times in Tonetown (san inc crack) [4am, san inc, A-Noid] Temple of Apshai Trilogy (version 1985-09-30) (san inc crack) [4am, san inc, A-Noid] Temple of Apshai Trilogy (version 1985-10-08) (san inc crack) [4am, san inc, A-Noid] The Chessmaster 2000 (version 1.02) (4am crack) [4am, A-Noid] The 
Crimson Crown (san inc crack) [4am, san inc, A-Noid] The Duel: Test Drive II (san inc crack) [4am, san inc, A-Noid] The Hitchhiker's Guide to the Galaxy (release 47 / 840914-C) (4am crack) [4am, A-Noid] The King of Chicago (san inc crack) [4am, san inc, A-Noid] The Lüscher Profile (san inc crack) [4am, san inc, A-Noid] The Mind Prober (version 1.0) (san inc crack) [4am, san inc, A-Noid] The Mist (san inc crack) [4am, san inc, A-Noid] The Quest (4am crack) [4am, A-Noid] The Slide Show Magician (version 1.2) (4am crack) [4am, A-Noid] The Surgeon (version 1.5) (san inc crack) [4am, san inc, A-Noid] The Toy Shop (version 1.1) (san inc crack) [4am, san inc, A-Noid] The Witness (release 22 / 840924-C) (4am crack) [4am, A-Noid] ThinkTank 128 (version 1.000) (4am crack) [4am, A-Noid] Uninvited (version 1.0) (san inc crack) [4am, san inc, A-Noid] Uninvited (version 2.1D1) (san inc crack) [4am, san inc, A-Noid] Where in Europe is Carmen Sandiego? (version 1.0) (4am crack) [4am, A-Noid] Winter Games (version 1985-10-24) (san inc crack) [4am, san inc, A-Noid] Winter Games (version 1985-10-31) (san inc crack) [4am, san inc, A-Noid] Wishbringer (release 68 / 850501-D) (4am crack) [4am, A-Noid] Wizardry: Proving Grounds of the Mad Overlord (version 1.10) (san inc crack) [4am, san inc, A-Noid] Zork II (release 48 / 840904-C) (4am crack) [4am, A-Noid] Zork III (release 17 / 840727-C) (4am crack) [4am, A-Noid]
Add files via upload
Hi!! Portfolio Webpage - Favorite Place: Sri Lanka
Description: Welcome to my portfolio webpage, "Beauty of Sri Lanka"! This single-page website showcases my love and appreciation for the beauty of nature through photography and achievements related to different wonders of Sri Lanka. The webpage aims to create a visually appealing and interactive experience for visitors, providing insights into my passion for nature.
The theme of this portfolio webpage revolves around the beauty of Sri Lanka. As an avid nature enthusiast and photographer, I have curated a collection of stunning photographs capturing various landscapes, wildlife, and natural phenomena. Additionally, I have included a section highlighting my achievements and experiences in nature-related activities and projects. The overall theme aims to immerse visitors in the breathtaking landscapes and the importance of conservation and appreciation for nature's diversity.
I hope you enjoy exploring the beauty of Sri Lanka through my photography and achievements showcased on this portfolio webpage. As you scroll through the different sections and interact with the elements, I invite you to immerse yourself in the beauty and diversity of the natural world. Feel free to contact me through the provided contact form if you have any inquiries or would like to discuss any nature-related projects or collaborations.
I FREAKING CAMED
YO YO MAMA PEED HER LITTLE SLIT DUDE HAHA GET FUCKING SHIT ON YOU FUCKING NERD
Add Kurtosis to list of testing tools (#1063)
- Add Kurtosis to list of testing tools
Hey team! We'd like to add Kurtosis to the list of testing tools.
What is Kurtosis? Kurtosis is a build system for multi-(Docker)-container test environments. What is Kurtosis for? Kurtosis is for engineers who dev against large distributed systems/applications and who experience pain when trying to configure multi (Docker) container environments for their testing workflows.
Kurtosis can be used locally without the need to sign up and is free-forever under a source-available license (BSL).
We have:
- Linked out to our GitHub: https://github.com/kurtosis-tech/kurtosis
- A README on our GitHub
- Content about how to set up/install the project (in our GitHub README and in our docs)
- Lots of great examples on GitHub and in our docs.
I followed the Quality Standards you guys wrote, but please let me know if you've got any questions about Kurtosis or if we missed something!
Thanks
- add "composable" to description
Merge pull request #45 from SPT-Mod-Dev/Romania-Focus-Tree
Somebody once told me the world is gonna roll me I ain't the sharpest tool in the shed She was looking kind of dumb with her finger and her thumb In the shape of an "L" on her forehead
Well, the years start coming and they don't stop coming Fed to the rules and I hit the ground running Didn't make sense not to live for fun Your brain gets smart but your head gets dumb
So much to do, so much to see So what's wrong with taking the back streets? You'll never know if you don't go (GO!) You'll never shine if you don't glow
Hey, now, you're an all-star, get your game on, go play Hey, now, you're a rock star, get the show on, get paid And all that glitters is gold Only shooting stars break the mold
It's a cool place and they say it gets colder You're bundled up now wait 'til you get older But the meteor man beg to differ Judging by the hole in the satellite picture
The ice we skate is getting pretty thin The water's getting warm so you might as well swim My world's on fire. How about yours? That's the way I like it and I'll never get bored
Hey, now, you're an all-star, get your game on, go play Hey, now, you're a rock star, get the show on, get paid And all that glitters is gold Only shooting stars break the mold
Go for the moon Go for the moon Go for the moon Go for the moon
Hey, now, you're an all-star, get your game on, go play Hey, now, you're a rock star, get the show on, get paid And all that glitters is gold Only shooting stars
Somebody once asked could I spare some change for gas I need to get myself away from this place I said yep, what a concept I could use a little fuel myself And we could all use a little change
Well, the years start coming and they don't stop coming Fed to the rules and I hit the ground running Didn't make sense not to live for fun Your brain gets smart but your head gets dumb
So much to do, so much to see So what's wrong with taking the back streets? You'll never know if you don't go You'll never shine if you don't glow
Hey, now, you're an all-star, get your game on, go play Hey, now, you're a rock star, get the show on, get paid And all that glitters is gold Only shooting stars break the mold
And all that glitters is gold Only shooting stars break the mold
Chen And Garry's Ice Cream: Ice Cream DLC (LIZARD APPROVED!) (#77174)
Authored with help and love from @Thalpy
I scream for ice cream!!
Introduces many new flavours of ice cream: Caramel, Banana, Lemon Sorbet, Orange Creamsicle, Peach (Limited Edition!), Cherry Chip, and Korta Vanilla (made with lizard-friendly ingredients!)
Korta Cones! Now too can Nanotrasen's sanitation staff enjoy the wonders of ice cream! You can also substitute custom ice cream flavours with korta milk! Finally, the meaty ice cream lactose-intolerants asked for is in reach!
I always thought the ice cream vat could use more flavours. Custom flavours aside, it isn't as intuitive to rename the cone, and the added variety is good. The lack of a banana flavour was already questionable. All the ice cream flavours previously used a selection of five sprites; now it's a single sprite, which better supports more additions. Some of the flavours don't use milk! You can't do that with the custom flavour, making them slightly more interesting.
🆑 YakumoChen, Thalpy add: Chen And Garry's Ice Cream is proud to debut a wide selection of cool new frozen treat flavours on a space station near you! add: Chen And Garry's Ice Cream revolutionary Korta Cones allow our ice cream vendors to profit off the lizard demographic like never before! code: Ice cream flavours now are all greyscaled similarly to GAGs /🆑
Checkout vimrc from specified commit, rather than most recent
- This guy added a BUNCH of shit in the last year or so, and the plugins have gotten borderline unusable as a consequence. PLUS it installed CoPilot without my realizing it, which is shady as fuck tbph. This is doing a shallow clone from a specified hash until I get around to forking the repo to avoid this happening in the future.
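The pin-to-a-known-commit flow described above can be sketched as a small helper. This is illustrative only: the URL, hash, and destination in the comments are placeholders, not the actual upstream repo.

```shell
# pin_vimrc: clone a repo and detach HEAD at a known-good commit, so
# later upstream changes (and surprise plugin additions) are never
# pulled in. All arguments are supplied by the caller; the examples in
# the comments are hypothetical.
pin_vimrc() {
  repo_url=$1     # e.g. https://github.com/example/vimrc.git
  pinned_hash=$2  # e.g. a known-good commit SHA
  dest=$3         # e.g. ~/.vim_runtime
  git clone "$repo_url" "$dest" &&
  git -C "$dest" checkout --detach "$pinned_hash"
}
```

Note that a true *shallow* clone of an arbitrary hash needs `git fetch --depth 1 origin <hash>` into a freshly initialized repo, and that only works when the server permits fetching by SHA (`uploadpack.allowReachableSHA1InWant`); a plain clone followed by a detached checkout, as above, works everywhere.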
Limbus Company E.G.O dump (#1062)
- Adds roseate desire
roseate sfx
datums
weapons
add aedd
sprite adjustments
unfucks suits
new sfx
name fix
aaaa
adds capote
adds sloshing
farmwatch
farmwatch suit
stuff
farmwatch stuff
capote inhands
red sheet finished
sloshing gift
linters
Stuff
stuff
fixes shit
stuff
weapon code cleanup
spicebush finished
removes the heal
code fix
stuff
removes reference
farmwatch hat
new vfx
requested changes
- block duration
Co-authored-by: Mr.Heavenly [email protected]
Created Text For URL [nypost.com/2023/08/07/i-used-ai-to-catch-my-boyfriend-cheating-heres-how-you-can-too/]
[MIRROR] Base Female sprite tweaks [MDB IGNORE] (#22924)
- Base Female sprite tweaks (#77407)
ASS STUFF HAS BEEN REMOVED BUT I STILL HATE IT
This PR tones down the proportions of the female base sprites, as currently they have about SIX extra pixels on the ass and a random pixel missing from the neck, which breaks some hairstyles & makes the neck look quite stupid. It also adds a couple pixels to the male one because theirs was so stupidly SMALL it looked like they had no tailbone (still does, kind of).
Here is the current sprite
& new sprite (only neck pixel removed)
Fixes some hairs
🆑 image: fixes weird inconsistency on the neck and butt of the female base sprite /🆑
- Base Female sprite tweaks
Co-authored-by: Sheits [email protected]
[MIRROR] Science Resprite! (With Sovl!) [MDB IGNORE] (#22861)
- Science Resprite! (With Sovl!) (#77314)
What a crusty department. These outfits are... Something.
How old are these now? I'm pretty sure they're unchanged since when I started playing years ago on other servers.... besides the RD Turtleneck and Roboticist suit of course. But they still did have some touch-ups to be made...
Regardless, I think this department deserves a little love! I've tried to stay true as I could to their current designs; this isn't a re-design, just a re-sprite. I used the base jumpsuit design from Medbay for most of these since it's the most modern suit that fit with the colored-spots style.
All of them have been touched up, and the RD's "alt" is now a subtype of the buttondown so it can easily inherit any sprite updates in the future.
These deserved some touch-ups and modernization, and while I'm not keen on entirely reworking them I figured I could at the least give them the update the Science Team deserves.
(The buttondown has an outdated obj sprite in this image! It's since been made smaller and more folded) Also labcoats for comparison
🆑 image: resprited the entirety of RnD! Genetics, Robotics, the RD, and the Science Team themselves will enjoy the fresh new looks but same great taste! No, wait, great STYLE! Don't eat these, they're covered in chemicals. /🆑
- Science Resprite! (With Sovl!)
- Update vending.dm
Co-authored-by: OrionTheFox [email protected] Co-authored-by: Bloop [email protected]
A ridiculously lovely day
Almost cried when Hiroki read that poem. Rae made everyone laugh constantly. Saw L, A&J&baby, went with C, had a wonderful time.
Updating: 8/7/2023 10:00:00 PM
- Added: Notes on exit interviews (https://rednafi.com/zephyr/notes_on_exit_interviews/)
- Added: Removing Randomness with LLDB (https://bryce.co/lldb-remove-randomness/)
- Added: My (Painful) Experience With Ubiquiti As A DevOps Engineer | tyzbit.blog (https://tyzbit.blog/my-painful-experience-with-ubiquiti-as-a-devops-engineer)
- Added: How does a CTO know when they need a coach? (https://blog.mocoso.co.uk/2023/06/26/how-does-a-cto-know-when-they-need-a-coach/)
- Added: Writing a recursive descent parser in Bash · m10k (https://m10k.eu/2023/07/29/pkgex-parser.html)
- Added: Being a Disciplined Person In an Undisciplined World (https://durmonski.com/self-improvement/disciplined-person/)
- Added: Tradeoffs in Testing (https://dillonshook.com/tradeoffs-in-testing/)
- Added: The Central Binomial Coefficient and Laplace's Method (https://cgad.ski/blog/the-central-binomial-coefficient-and-laplaces-method.html)
- Added: The universe is (probably) not a simulation (https://blog.georgovassilis.com/2023/08/07/the-universe-is-probably-not-a-simulation/)
- Added: "Make something idiot-proof, the world will show you a better idiot" (https://www.izoukhai.com/blog/make-something-idiot-proof-the-world-will-show-you-a-better-idiot)
- Added: Humble Tech Book Bundle: Math for Programmers 2023 by Manning (https://www.humblebundle.com/books/math-for-programmers-2023-manning-books)
- Added: Hacking is child's play - SQL injection with Havij by 3 year old (https://youtube.com/watch?v=Fp47G4MQFvA)
- Added: 06 We Speak: IBM MQ (https://youtu.be/fv-ag8h5lYA)
- Added: MassTransit Bus Stop - Request Response via Messaging (RPC) (https://youtube.com/watch?v=TWWVHWlBGIs)
- Added: Disinterest. (https://www.baldurbjarnason.com/2023/disinterest/)
- Added: Bike: Row Types (https://www.hogbaysoftware.com/posts/bike-row-types/)
- Added: Growth Experiment: How Much Traffic Unsplash Drives to Your Website - Fresh van Root (https://freshvanroot.com/blog/how-much-traffic-unsplash-drives-website/)
- Added: Layoffs and its impact (https://aravind.dev/layoff/)
- Added: The Concretude of the Cloud (https://notes.ghed.in/cloud-computing-water-consumption-90e7e46f5d54)
Generation took: 00:10:08.3053764
[NO GBP] Fixes clown car + deer collision (#77076)
A not-so-long time ago I drunkenly coded #71488 which did not work as intended.
I return now, in a state of reflective sobriety, to rectify that.
The clown car will now not only crash like it should, but will also cause (additional) head injuries to some occupants and kill the deer on impact.
Content warnings: Animal death, vehicle collision, blood, DUI.
2023-07-24.15-49-41.mp4
Fixes the product of a silly PR that never actually worked. Also gives it a bit more TLC in the event that this joke actually plays out on a live server.
🆑 fix: Clown cars now properly collide with deer. sound: Violent, slightly glassy car impact sound. /🆑
Update GraphQL to use class-based SDK 🫠
This commit upgrades us to GraphQL 1.11.10, from 1.9.21, and begins using the new class-based SDK in place of the previous .define syntax. Though this change isn't supposed to be required until GraphQL 2.0, I found in practice that we couldn't upgrade to the minimum version needed for Ruby 3.0 (1.11.7) without doing it.
What I did
With the new GraphQL DSL, we are able to use Ruby class composition, which includes inheritance from base classes and including modules to get specific functionality.
I tried to create some useful abstract base classes that borrowed heavily from our previous GraphQL CRUD generator. We had a lot of carveouts in the generator for specific classes (specifically in .define_parent_returns or the generator functions for what wanted to be classes), and I tried to eliminate these and move them to explicit declarations in classes now that we have the ability to do that (yay library enabling composition!)
As much as possible, I tried to move us toward declaring things explicitly to reduce my and hopefully future us's confusion rather than using metaprogramming or extracting abstract functions. Going forward, I think we should only extract methods or classes where there is shared functionality for a type or mutation, and rely on explicit declaration or modules for attributes that differ based on the underlying class we're modifying.
There is some fairly tricky metaprogramming and code in the mutation base classes, because of problems I ran into with inheritance and other things. I also hope that now that the pattern is set up we won't have to touch them much, and can instead rely on configuring their child classes.
Some gotchas I ran into / things of note
Our API uses a mix of camel case and snake case. The new GraphQL library wants to convert everything to camel case, which would break our API. I've tried where possible to default attributes to snake case (changing this in the base Type), but that's not possible for everything, specifically when we set fields and arguments on mutation classes. To override the default camel-casing behavior, we have to manually set camelize: false on any field or argument that has an underscore in mutation classes.
- GraphQL Ruby now has a new base mutation class that we could opt into if we want, but it changes the way the API would look - of note is removing the input: key from request bodies, not allowing direct setting of inputs and fields on the input type in the mutation class (I think), and not automatically injecting the global ID in the response. To maintain our old behavior, we have to descend from GraphQL::Schema::RelayClassicMutation rather than GraphQL::Schema::Mutation, which I've done in Mutations::BaseMutation (and all descendent classes we expect to use for our mutations: Mutations::CreateMutation, Mutations::UpdateMutation, Mutations::DestroyMutation, Mutations::BulkUpdateMutation, Mutations::BulkDestroyMutation)
- The GraphQL schema now gets generated at runtime. When our tests run, they start the application at most once per file. This means the previous approach, creating and deleting annotation types (which correct schema generation relies on) and reloading the schema as necessary, no longer works. As a result we now call TestDynamicAnnotationTables.load! before GraphQL controller tests, which loads the annotation tables generated in development into a fresh test database each time.
- Our QueryType returns an always null ID field. This is because the frontend was requesting it even though we don't use it. I don't know why, but this fixed it. So, I added it and moved on.
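The camel-casing gotcha above is easy to reproduce outside the app. The function below is not graphql-ruby code, just a minimal mimic of its default field-name conversion, to show why snake_case names break without the opt-out:

```ruby
# Minimal mimic of graphql-ruby's default field-name conversion
# (illustrative only, not the library's actual implementation).
def graphql_default_name(sym)
  parts = sym.to_s.split("_")
  ([parts.first] + parts[1..].map(&:capitalize)).join
end

# A field declared as :slack_webhook would be exposed as "slackWebhook",
# breaking any client that still queries the snake_case name:
puts graphql_default_name(:slack_webhook)  # => slackWebhook
puts graphql_default_name(:id)             # => id
```

In graphql-ruby itself, the opt-out is the `camelize: false` keyword on `field`/`argument` declarations, as described above.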
Resources:
Here's the migration guide I leaned on heavily: https://imaharu.github.io/graphql-ruby-doc-ja/schema/class_based_api.html#compatibility--migration-overview. It's still missing a lot, but may help if you want to understand changes in the API.
Also, the new API is described in the GraphQL ruby docs: https://graphql-ruby.org/guides
Some additional things that appear in this commit, that may be of note in future
- Manually specify resolve functions for missing methods
We sometimes automatically generate getters for attributes stored as a hash (e.g. get_slack_webhook, get_languages), but since we rely on method_missing to do so, GraphQL doesn't detect those methods as valid, because it checks their validity using Model.respond_to?(:method_name), which returns false even though Model.method_name returns a legitimate value. To get around this, we can manually create resolve methods with the attribute name that just call into the model.
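The respond_to? pitfall can be shown in a self-contained sketch. Model and get_slack_webhook here are hypothetical stand-ins, not the real application code:

```ruby
# A class that serves a dynamic getter via method_missing but does not
# declare it in respond_to_missing? -- mirroring the situation where
# graphql-ruby's respond_to? check rejects a method that actually works.
class Model
  def method_missing(name, *args)
    name == :get_slack_webhook ? "https://hooks.example/abc" : super
  end

  def respond_to_missing?(name, include_private = false)
    # Intentionally NOT declaring the dynamic getter here; this is what
    # makes respond_to? come back false below.
    super
  end
end

m = Model.new
puts m.respond_to?(:get_slack_webhook)  # => false, so the field is rejected
puts m.get_slack_webhook                # => the call still succeeds
```

Defining an explicit resolve method (or declaring the name in respond_to_missing?) makes the method visible to the validity check.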
- Allow batch nesting
This is now allowed in GraphQL batch, and the error we were previously catching no longer exists.
- Properly implement nodes and interfaces, and support OpenStructs
OpenStructs break the class hierarchy, so we need more explicit handling to support our current use case. Our id_from_object now calls down into the OpenStruct :type method, as we were doing before.
- Fix edge setting in most mutation classes
I think there was a naming conflict (name -> obj_name) causing things to go totally wonky.
- Fix tests that depend on a specific schema
Certain tests require annotation types to be in the database before loading the schema. We can't reload the schema easily in test environment. Given that this data is actually static in practice (and hardcoded into the GraphQL schema we generate), we might want to consider preloading it into the database rather than creating it in individual tests. For now, we just create the dynamic annotations in the test setup for the controller test file the test is run within.
Try to avoid altering the other tests for now, even though we'll need to eventually. Begins loading in DynamicAnnotation stuff from a dump of our local (and QA) database after a fresh build so that we can have a consistent GraphQL schema. The new GraphQL interpreter only creates the schema at runtime, which means we build it once per test run. As a result, we need to have all of our DynamicAnnotation annotation types and fields in place before we start the app to run tests.
This sidesteps fixing the dynamic annotation stuff for the rest of the app, since that's a big one.
- Change flaking tests to use createDynamic instead of db-gen mutation
Also adds :action to Dynamic mutations to support using createDynamic for createDynamicReportDesign, which requires the action attribute in certain circumstances. Kind of a hack for tests to pass, since we're having problems loading the schema from database consistently
CV2-3094
Optional CompoundTag Keys
Brought over from the test 'Primitive Wrapper Objects: Take Two' branch: `undefined` is now an acceptable entry in `CompoundTag` key/value pairs!

This change also makes the default `RootTag` return type for various function calls more usable, but I'm going to roll back that part in this commit. Having `RootTag` as the default return type makes sense in terms of what it is at the type level, but it isn't as user-friendly when actually doing things with it. So the default return type will once again be `any`, which feels more in line with how you would work with `JSON.parse()` or `Response.prototype.json()`, which NBTify is semi-trying to mirror in some aspects. #32

I think I need to rework the unsafe/safe `ListTag` and `CompoundTag` handling again. Now that `undefined` is an acceptable value at the type level, there needs to be a clearer line of distinction about where it can be present in the `CompoundTag` type as it passes through the library. It gets removed when passed through `fromCompoundUnsafe()`, but the return type `CompoundTag` still lists it as a possible value for keys. So for `undefined` at least, I need to at minimum move the handling into the places where `CompoundTag`s are traversed directly, since it is allowed on the type now, unlike before. The Stringify and Write modules use `!` to counteract this inconsistency at the moment.

Oh yeah, and `NBTData` still requires a first type parameter; I'm not going to revert that one, as it helps a lot with having users be explicit about their types when possible. They have to choose whether to provide their own type, or just use `any` if they don't know their types yet. If functions could require generics too, then I wouldn't default the return types to `any`, since requiring a generic would be the best use case for the UX of the return types. But I understand why that doesn't make as much sense, so it's ok :)
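The type-level situation described above can be sketched with simplified stand-ins (these are not NBTify's real tag definitions):

```typescript
// Simplified stand-ins for NBTify's tag types, for illustration only.
type Tag = string | number | boolean | Tag[] | CompoundTag;
interface CompoundTag { [name: string]: Tag | undefined; }

// With `undefined` allowed in the index signature, optional keys
// now type-check:
const data: CompoundTag = { Name: "Steve", Dimension: undefined };

// ...but traversal code (stringify/write) must now skip undefined
// entries explicitly, rather than relying on the type to exclude them.
const entries = Object.entries(data).filter(([, value]) => value !== undefined);
console.log(entries.length); // 1
```

This is the distinction the commit describes: the value is removed at runtime in some paths, but the type still admits it, so every place a `CompoundTag` is traversed directly needs its own `undefined` handling (or a `!` assertion, as the Stringify and Write modules currently use).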
Got lots of music the last few days! They have all been OUTSTANDING!!
Haken - Affinity 🕹️ Devin Townsend - Lightwork 🐙 Cleft - Whale Bone 🐋 Mike Keneally - Inkling (More from the Scambot 2 Sessions) 🦂🦂 Devin Townsend - Silent Night ❄️