Of the many events recorded by gharchive.org, 2,174,712 were push events containing 3,385,969 commit messages amounting to 264,736,897 characters, filtered with words.py@e23d022007... down to these 61 messages:
Fixes spoon overlay not updating every time (#74687)
After bludgeoning myself one too many times with a spoon, here we are.
The spoon overlay wasn't updating to reflect that soup had been consumed, which led to trying to eat it again and then pain.
Why do spoons hurt so much?
Fewer spoon-related injuries.
🆑 fix: spoon overlays will now update when you eat from them to reflect that food = gone. it really is gone, you can stop beating yourself with the spoon. oh god please stop-- /🆑
I FUCKING HATE VERCEL THEY CAN SUCK ON MY ENTIRE COCK GODDAMN
http: support CURLOPT_PROTOCOLS_STR
The CURLOPT_PROTOCOLS (and matching CURLOPT_REDIR_PROTOCOLS) flag was deprecated in curl 7.85.0, and using it generates compiler warnings as of curl 7.87.0. The path forward is to use CURLOPT_PROTOCOLS_STR, but we can't just do so unilaterally, as it was only introduced less than a year ago in 7.85.0.
Until that version becomes ubiquitous, we have to either disable the deprecation warning or conditionally use the "STR" variant on newer versions of libcurl. This patch switches to the new variant, which is nice for two reasons:
- we don't have to worry that silencing curl's deprecation warnings might cause us to miss other more useful ones
- we'd eventually want to move to the new variant anyway, so this gets us set up (albeit with some extra ugly boilerplate for the conditional)
There are a lot of ways to split up the two cases. One way would be to abstract the storage type (strbuf versus a long), how to append (strbuf_addstr vs bitwise OR), how to initialize, which CURLOPT to use, and so on. But the resulting code looks pretty magical:
GIT_CURL_PROTOCOL_TYPE allowed = GIT_CURL_PROTOCOL_TYPE_INIT;
if (...http is allowed...)
    GIT_CURL_PROTOCOL_APPEND(&allowed, "http", CURLOPT_HTTP);
and you end up with more "#define GIT_CURL_PROTOCOL_TYPE" macros than actual code.
On the other end of the spectrum, we could just implement two separate functions, one that handles a string list and one that handles bits. But then we end up repeating our list of protocols (http, https, ftp, ftps).
This patch takes the middle ground. The run-time code is always there to handle both types, and we just choose which one to feed to curl.
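As a rough sketch of that middle-ground shape (not the actual git patch; CURL_AT_LEAST_VERSION, the CURLOPT_/CURLPROTO_ names are real libcurl API, but the wrapper function and the hard-coded "http,https" list here are purely illustrative):

    #include <curl/curl.h>

    /* Sketch only: feed curl whichever protocol-restriction option this
     * libcurl build supports. restrict_protocols() and the fixed protocol
     * list are assumptions for illustration, not git's real code. */
    static void restrict_protocols(CURL *curl)
    {
    #if CURL_AT_LEAST_VERSION(7, 85, 0)
        /* 7.85.0+ prefers a comma-separated string. */
        curl_easy_setopt(curl, CURLOPT_PROTOCOLS_STR, "http,https");
        curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS_STR, "http,https");
    #else
        /* Older libcurl only understands the CURLPROTO_* bitmask. */
        curl_easy_setopt(curl, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
        curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
    #endif
    }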
Signed-off-by: Jeff King [email protected] Signed-off-by: Junio C Hamano [email protected] Signed-off-by: Johannes Schindelin [email protected]
Malfunctioning AIs get a discount on the Doomsday equipment by hacking Head of Staff APCs (#74225)
Reduces the price of the Doomsday equipment by 20 PT for each APC hacked in a Head of Staff office, as well as the Vault.
See #71404 for the prior PR and my full reasoning.
Long story short, activating the Doomsday before having a plan in place is suicide, and it takes 13 APCs to unlock. Since there are few well-hidden APCs in general, you'll usually be gathering APCs after going loud (such as with a borg machine). 13 APCs is 13 minutes, and by the time you've gathered your malfbux, you're either already dead or have already taken full control.
I had intended to give Doomsday a flat 70 PT price, but concerns were raised that an AI could conceivably hack only APCs on their sat (and perhaps one on the Lavaland outpost) and Doomsday without ever really touching the station*. So a compromise was proposed: we instead give the AI discounts for hacking Head of Staff areas and the Vault, which are usually situated in well-traveled areas, and which also have some fluff reasoning.
*I still think it'd be suicide to do this, but it's not a hill I want to die on.
🆑 balance: Malf AIs that hack Head of Staff and Vault APCs will now find a discount issued on Doomsday. /🆑
Co-authored-by: Jacquerel [email protected]
Blink is no longer a forbidden school spell?? (#74487)
Turns blink's school from forbidden to translocation. This has some incredibly minor changes nobody is going to notice:
- Changes the blink's invocations when mixed with a CERTAIN spell
- If you were very specifically a chaplain with the holy crusade sect and you cast blink, it would previously excommunicate you; now it will just smite you, as translocation spells are seen as less bad than forbidden magic
- probably some more niche interactions but that's all I can remember
Guys, I know blink is a very annoying spell but come on now it's not forbidden magic, that's for heretics and super duper evil stuffs
🆑 fix: blink is now a translocation spell /🆑
Icemoon Hermit Ruin Active Turf Fix - For Real This Time (#74476)
In #74306, I thought I knew what the cause was, and I both attempted a potential fix and made tracking it easier. The fruits of my labor paid off, I know exactly what caused it now.
Basically, the demonic portal will scrape away all turfs in a 5-tile radius on its Initialize(), and if a spawner spawned right next to the hermit ruin... it would count it as a mineral turf and scrape it away as well. That's so fucking silly. At least we know now.
The fix is to just make those tiles unscrapeable, which is accomplished via another turf_flag and filtering those out in the Initialize() of the demonic portals.
I also cleaned up the calls to scrapeaway being passed null, which is really weird because it just defaulted to the normal proc behavior. Naming the arguments instead does the same thing (I checked)
[MIRROR] Makes Black Market Uplinks more easily craftable, adds them to uncommon maint loot pool [MDB IGNORE] (#20602)
- Makes Black Market Uplinks more easily craftable, adds them to uncommon maint loot pool (#74744)
Replaced the subspace amplifier in the Black Market Uplink's crafting recipe with a signaller and a microlaser.
Added the Black Market Uplink to the maintenance loot pool.
The BMU is an extremely rare device to find in rounds. It can quite literally ONLY be found via the crafting recipe, and with how stupidly bloated the crafting lists are, it isn't something many people know about. All this means that a very unique and engaging gimmick item is tragically extremely obscured.
To add to this, the recipe requires a subspace amplifier. These items are UNBELIEVABLY rare - they need you to vend them from a techfab with bluespace communication technology researched, which is fair to say is not a common thing. Sometimes maps have them in tech storage, but even then you have to break and enter which can be quite risky at times and an annoying blockade the other times.
The black market items are not worth this much hassle. They are all small cute gimmicky objects that do not heavily impact the round. By making it not only easier to craft with common items, but also appear in the maintenance loot pool, this will make assistants find out about it more often, which can further incentivize them to utilize the cargo bounty system to get enough money to buy their funny gadgets.
Another idea would be to make the uplink appear as a bounty item, which would be a great way to tell players it exists and encourage them to mix both systems together. The system for getting items is also unnecessarily, miserably awful - your item either gets literally thrown into space from a random direction, or it is teleported silently without warning in 60 seconds onto a completely random place which can very much include Security, Command, the Vault, or other high-security areas. Needing to B&E into these areas to get your durathread vest is, uh. Not worth it. However these aren't part of this PR, unless they're given the A-OK. (also maybe make it cargo purchasable?)
🆑 balance: Makes Black Market Uplinks more easily craftable, adds them to uncommon maint loot pool /🆑
- Makes Black Market Uplinks more easily craftable, adds them to uncommon maint loot pool
Co-authored-by: carlarctg [email protected]
The North Star Expeditionary Vessel - A Second Wind (#74371)
A new map for TGstation, in the works! It has 4 fucking Z levels, a massive expansive maintenance with unique designs, and some unique code features in the works.
To Do:
- Update the Map to Modern TG
- Local Tests
- Work on Map Optimizations
- Run Live Tests
Fikou has greatly helped with creating an important flavour aspect of this map: Trek Uniforms on anyone who joins! See the forum thread for more. This includes the framework for innate station traits, i.e. station traits that are loaded as long as they're in a map's JSON.
Here's the forum dev thread; there are screenshots there: https://tgstation13.org/phpBB/viewtopic.php?p=657252#p657252
Ckey to receive rewards: Cheshify
So, this is the North Star. An effort involving multiple mappers and ~9 months of hard work. This map was not initially designed for TGstation, but it was always designed for TGstation code. The process of retooling the map for TGstation was an absolute joy, and I feel like the map definitely has its niche as a massive and unique experience for its players.
I adore this map, it's gorgeous, has a unique aesthetic, and a number of very funny interactions with multi-Z. The PR comes packed with unique mechanics for future mappers (innate station traits!), a number of map-fitting shuttles, and a fun spacefaring uniform gimmick for the crew.
This is my second attempt at bringing this map into rotation. It was initially closed due to concerns about maptick and performance, as I wasn't willing to push for a map to be added to the repository if it didn't function to my own standards. I've been informed by a number of coders far better than I that optimizations are arriving and en route, so I think it's time to dust her off and set sail for another journey.
Quick Disclaimer: Due to some design decisions disagreed upon by the headcoder team and myself, the map will not be featuring unique roundstart uniforms, and despite my design intentions, the innate station trait features will be shelved for now.
🆑 Cheshify, Fikou, Blue-Berry, Zytolg, InfiniteGalaxies, Striders, Sylphet, Riggle, Soal, Andry, Crit, Deranging, and Pumpkin0. add: Nanotrasen's Newest Exploratory Vessel is now available! Meet the North Star! add: More landmines, and a landmine random spawner. add: energy barriers now have a regenerative subtype, fit for permanent installations. code: Raised the number of possible level render to 4, check your preferences if needed to be reduced. /🆑
Co-authored-by: Fikou [email protected] Co-authored-by: Mothblocks [email protected]
Hologram Touchup (Init savings edition) (#74793)
Hologram generation currently involves a bunch of icon operations, which are slow. Not to mention a series of get flats for the human models, which is even worse.
We lose 0.05 seconds of init to them, off just the 2 RCD holograms. It hurts, man.
So instead, let's use filters and render steps to achieve the same effect.
While I'm here I'll dim the holo light and make it blue, make the hologram and its beam emissive (so they glow), and do some finagling with move_hologram() (it doesn't clear the hologram on failure anymore, instead relying on callers to do that) to ensure holocalls can't be accidentally ended by moving out of the area.
Ah, and I added RESET_ALPHA to the emissive appearance flags, cause the alpha does override and fuck with color rendering, which ends up looking dumb. If we're gonna support this stuff it should be first class, not accidental.
While I'm here (since holograms see static), let's ensure the static plane is always visible if you're seeing through an AI eye.
The old solution was limited to applying it to JUST AIs, which isn't satisfactory for this sort of thing and missed a LOT of cases (I didn't really get how AI eyes worked before, I'ma be honest)
I'm adding a signal off the hud for it detecting a change in its eye here. This is semi-redundant, but avoids unneeded dupe work, so I'm ok with it.
The pipeline here is less sane than I'd like, but it works and that's enough
More pretty, better ux, static works
🆑 add: Holograms glow now, pokes at the lighting for holocalls in general a bit to make em nicer. qol: You can no longer accidentally end a holocall (as a non ai) by leaving the area. Felt like garbage fix: Fixes static rendering improperly if viewed by a non ai /🆑
Fixes Active Turf Scenario on Tramstation (#74354)
On the tin. Basically, whenever atmoscilower_2.dmm would invoke atmoscilower_attachment_a_2.dmm, it would trigger an active turf in this location since it doesn't have a "ceiling" (as well as there being an "aired" turf mingling with airless turfs).
This caused the following report:
- All that follows is a turf with an active air difference at roundstart. To clear this, make sure that all of the turfs listed below are connected to a turf with the same air contents.
- In an ideal world, this list should have enough information to help you locate the active turf(s) in question. Unfortunately, this might not be an ideal world.
- If the round is still ongoing, you can use the "Mapping -> Show roundstart AT list" verb to see exactly what active turfs were detected. Otherwise, good luck.
- Active turf: Station Asteroid (163,80,2) (/area/station/asteroid). Turf type: /turf/open/floor/plating/airless. Relevant Z-Trait(s): Station.
- Active turf: Lesser Starboard Maintenance (163,81,2) (/area/station/maintenance/starboard/lesser). Turf type: /turf/open/floor/plating. Relevant Z-Trait(s): Station.
- Active turf: Station Asteroid (164,80,2) (/area/station/asteroid). Turf type: /turf/open/floor/plating/airless. Relevant Z-Trait(s): Station.
- Active turf: Lesser Starboard Maintenance (164,81,2) (/area/station/maintenance/starboard/lesser). Turf type: /turf/open/floor/plating. Relevant Z-Trait(s): Station.
- Active turf: Station Asteroid (165,80,2) (/area/station/asteroid). Turf type: /turf/open/misc/asteroid/airless. Relevant Z-Trait(s): Station.
- Active turf: Lesser Starboard Maintenance (165,81,2) (/area/station/maintenance/starboard/lesser). Turf type: /turf/open/floor/plating. Relevant Z-Trait(s): Station.
- Active turf: Station Asteroid (166,81,2) (/area/station/asteroid). Turf type: /turf/open/floor/plating/airless. Relevant Z-Trait(s): Station.
- Active turf: Lesser Starboard Maintenance (165,83,2) (/area/station/maintenance/starboard/lesser). Turf type: /turf/open/floor/iron/smooth. Relevant Z-Trait(s): Station.
- Active turf: Station Asteroid (165,83,3) (/area/station/asteroid). Turf type: /turf/open/openspace/airless. Relevant Z-Trait(s): Station.
- Z-Level 2 has 8 active turf(s).
- Z-Level 3 has 1 active turf(s).
- Z-Level trait Station has 9 active turf(s).
- End of active turf list.
This is what it looked like when it was reproduced on my machine:
Surprisingly not that hard to debug, albeit tedious. At least I know that this was the issue with 100% confidence.
Ate up 0.1 seconds of init on my machine. That's silly.
No way players care
ADDED FUCKING COLLISION FUCK YOU YOU STUPID PIECE OF SHIT OH MY GOD
Organ Unit Tests & Bugfixes (#73026)
This PR adds a new unit test for all organs, a new unit test for lungs, and includes improvements for the existing breath and organ_set_bonus tests. Using the tests, I was able to root out bugs in the organs. This PR includes an advanced refactor of several developer-facing functions. This PR certainly represents a "quality pass" for organs which will make them easier to develop from now on.
- Fixed many fundamental bugs in organ code, especially in Insert()/Remove() and their overrides.
- Added two new procs to /obj/item/organ named on_insert and on_remove, each being called after Insert()/Remove().
- Added an organ_effects lazylist to /obj/item/organ. Converted organ_traits to a lazylist. 2x less empty lists per organ.
- Adding SHOULD_CALL_PARENT(TRUE) to Insert()/Remove() was very beneficial to stability and overall code health.
- Created unit test organ_sanity for all usable organs in the game. Tests insertion and removal.
- Created unit test lungs_sanity for /obj/item/organ/internal/lungs.
- Improved breath_sanity unit tests with additional tests and conditions.
- Improved organ_set_bonus_sanity unit tests with better documentation and maintainable code.
- A lot of organs are overriding Insert() to apply unique side-effects, but aren't checking the return value of the parent proc, which causes the side-effects to activate even if the insertion technically fails. I noticed the use-case of applying "unique side-effects" is repeated across a lot of organs in the game, and by overriding Insert() the potential for bugs is very high; I solved this problem with inversion of control by adding two new procs to /obj/item/organ named on_insert and on_remove, each being called after Insert() and Remove() succeed.
- Many organs, such as abductor "glands", cursed heart, demon heart, alien hive-node, alien plasma-vessel, etc., were not returning their parent's Insert() proc return value at all, and as a result those organs' Insert()s were always returning null. I have been mopping those bugs up in my last few PRs, and now the unit test reveals it all. Functions such as those in surgery expect a truthy value to be returned from Insert() to represent insertion success, and otherwise it force-moves the organ out of the mob.
- Fixed abductor "glands", which had a hard-del bug due to their Remove() not calling the parent proc.
- Fixed cybernetic arm implants, which had a hard-del bug due to Remove() not resetting their hand variable to null.
- Fixed the lungs gas exchange implementation, which was allowing exhaled gases to feed back into the inhaled gases, which caused Humans to inhale much more gas than intended and not exhale expected gases.
- The new organ_sanity unit test gathers all "usable" organs in the game and tests to see if their Insert() and Remove() functions behave as we expect them to.
- Some organs, such as the Nightmare Brain, cause the mob's species to change which subsequently swaps out all of their organs; the unit test accounts for these organs via the typecache species_changing_organs.
- Some organs are not usable in-game and can't be unit tested, so the unit test accounts for them via the typecache test_organ_blacklist.
- This unit test focuses on /obj/item/organ/internal/lungs including Plasmaman and Ashwalker lungs. The test focuses on testing the lungs' check_breath() proc.
- The tests are composed of calling check_breath with different gas mixes to test breathing and suffocation.
- Includes gas exchange test for inhaled/exhaled gases, such as O2 to CO2.
- Added additional tests for suffocation with empty internals, pure Nitrogen internals, and a gas-less turf.
- Includes slightly more reliable tests for internals tanks.
Organs and Lungs were mostly untested. Too many refactors have been submitted without the addition of unit tests to prove the code works at all. Time to stop. Time to get some help. Due to how bad the code health is in organs, any time we've tried to work with them, some sort of bug caused them to blow up in our faces. I am trying to fix some of that by establishing some standard testing for organs. These tests have revealed and allowed me to fix a lot of basic developer errors/oversights, as well as a few severe bugs.
🆑 A.C.M.O. fix: Fixed lungs gas exchange implementation, so you always inhale and exhale the correct gases. fix: Fixed a large quantity of hard-deletes which were being caused by organs and cybernetic organs. fix: Fixed many organs which were applying side-effects regardless of whether or not the insertion failed. code: Added unit tests for Organs. code: Added unit tests for Lungs. code: Improved unit tests for breathing. code: Improved unit tests for DNA Infuser organs. /🆑
[MIRROR] Moves revolution code out of flash code, fixes April Fools conversion forcesay never working in any circumstances [MDB IGNORE] (#20358)
- Moves revolution code out of flash code, fixes April Fools conversion forcesay never working in any circumstances (#74411)
- Signallizes head revolutionary flash conversion code, moving it out of core flash code.
- Removes "tacticool" flashing from head revs, but they can still convert from any direction.
- Fixes the April Fools "You son of a bitch! I'm in" force say never working.
  - Revs are muted on conversion so they couldn't talk. Fixed by only muting revs on non-holidays.
  - Cultists are unconscious on conversion so they couldn't talk. Fixed by only knocking cultists unconscious on non-holidays.
- Brainwash victims are more often than not unconscious/asleep, so they couldn't talk. Just left this one.
- Reduced the chance of them occurring and limited it to April Fools only.
- A 1% chance of the force says occurring means they will happen pretty much once a week, given multiple rev/cult rounds happen every week and on average like 20 people are converted. A little absurd; it's good that it never worked?
Antag code in core item code is bad
It's funny this meme has existed for like 2, 3 years now? No one's tested it, it's never worked
🆑 Melbert refactor: Removes Rev code from core flash code fix: Getting converted on April Fools now triggers the meme force say as always intended del: The meme force say can no longer trigger on any day (it didn't work before anyways) /🆑
- Moves revolution code out of flash code, fixes April Fools conversion forcesay never working in any circumstances
Co-authored-by: MrMelbert [email protected]
[MIRROR] Goliath-Infused Tendril Hammer uses an internal cooldown for its special attack instead of a universal click cooldown [MDB IGNORE] (#20092)
- Goliath-Infused Tendril Hammer uses an internal cooldown for its special attack instead of a universal click cooldown (#74159)
Rather than using a click cooldown, the tendril hammer instead can make its special heavy attack every 2 seconds.
In my newfound quest to try and eliminate universal click cooldowns or weird non-interactivity timers as balancing factors, this definitely is one of the biggest standout offenders. Lemme make an argument for universal click cooldown increases being an ineffective limitation.
I'll use the tendril hammer to highlight some of those problems, as well as problems unique to the tendril hammer itself.
da big discussion
A) The functionality of the hammer actively inhibits all in-game handuse interaction for several seconds, without explaining this to a player. As a player, you won't know why this is happening, as universal click cooldown is not present as a UI element.
B) Since universal click cooldowns are not visible to players, it might feel more like the game is malfunctioning rather than being a deliberate mechanic. Even if click cooldowns were visible, players probably would think that the cooldown applies to the hammer, and not handuse interactivity with the game world as a whole for several seconds.
C) The functionality of the hammer could work fine as an internal cooldown on the hammer, only relevant to the hammer. This ensures that its special effects are exclusive, without the need to interrupt player interaction as a whole.
D) Since we're talking about miners: if someone is concerned about the hammer being used on the station against carbon players, you need someone to help mutate you into a goliath mutant, which cannot be bypassed whatsoever. An excellent example of something similar is the chainsaw arm, created right next door to genetics in robotics, which does even more force than the arm and is sharp. With the limitations that exist, I think it probably discourages most powergaming, if that was even a realistic concern (it really isn't).
E) You lose both a hand AND your gloves slot when you get the hammer. No modsuits, no glove equipment, no two-handed equipment, and you now have to juggle everything with one hand assuming you're not on your, once again, universal click cooldown for several precious seconds. Miners live or die in their rapid response to problems. This is also the total sum of what you lose as a miner. That's a steep cost and it just doesn't justify its own value compared to what you lose.
TL;DR - There is no offset to the cost of this weapon, it is strictly a detriment because of poorly conceived implementation.
This is maybe one of the coolest ideas conceptually for the infusions so far, heavily hampered by what seems to be an intense fear of the mutation being too useful. So it was made borderline masochistic to willingly seek out and use.
I want to see this actually be useful. I can't see this with the restrictions it has. Hopefully this is enough to make it worthwhile getting.
🆑 balance: Changes the universal click cooldown of the tendril hammer from the goliath infusion into an internal cooldown just for the special heavy attack. /🆑
Co-authored-by: Jacquerel <hnevard@ gmail.com>
- Goliath-Infused Tendril Hammer uses an internal cooldown for its special attack instead of a universal click cooldown
Co-authored-by: necromanceranne [email protected] Co-authored-by: Jacquerel <hnevard@ gmail.com>
[MIRROR] Microing var/static times (~0.015 seconds of init) [MDB IGNORE] (#20688)
- Microing var/static times (~0.015 seconds of init) (#74769)
Moth and I came up with an affront to god and man, and used it to track the time spent creating /static (and in theory /global) variables (this happens right at the start of init). They cost as a sum about 0.05 seconds btw, at least currently.
/datum/timer
    var/key

/datum/timer/New(file, line)
    src.key = "[file]:[line]"

/datum/timer/proc/operator*(x)
    rustg_time_reset(key)
    return x

/datum/timer/proc/operator+(x)
    var/time = rustg_time_microseconds(key)
    world.log << "TIMER: [key]: [time]"
    return x
Regex:
var/static/([\w/]+) =
-> var/static/$1 = (new /datum/timer(__FILE__, __LINE__)) * (new /datum/timer(__FILE__, __LINE__)) +
Output on moth's pc looks like this, time in microseconds
Most of this is either icon_states() memes (which appears to be cached btw, that's interesting), or a variation on typecacheof(). There is one get_asset_datum call, but that is ALREADY cached and so is just redundant. That's a good 0.01 seconds saved.
The rest of the time here is slightly more interesting.
The majority of typecacheof() is iterating the output of typesof(), a byond internal proc that returns a list of types that either are or are the child of the passed-in type. A decent chunk of time here (0.005 seconds, or 10% of the proc) can be saved by unrolling the arguments to the proc. It takes an arbitrary number of typepaths as input, but we can't, like, use arglist() here (cause this is an internal "proc"), so instead we try a window of args, passing in null if we start to try and take in too much. Window size matters; zebra fits better into 4 than 5, especially because of how grouping needs to work to make this effect happen. We save about 0.001 seconds for zebra btw, which is about 7%. It's lower cause we need to group the paths beforehand, I think.
The speedup is minor, but it DOES exist. Plus it's fun.
Microing is a hell of a drug
- Microing var/static times (~0.015 seconds of init)
Co-authored-by: LemonInTheDark [email protected]
Healer Balance Changes (#2896)
This pull request addresses the boring and low-risk gameplay of the Healer drone, who spends half the round sitting next to recovery nodes and recovering her health so she may use it again, rinse and repeat until a rine notices said drone has purple on it and booms her.
First, by changing her health from 600 to 500, Healer can spend more time healing her sisters than sitting through another 100 health to heal herself. Though this makes her less tanky than before, healing classes are not known to be tanks. To ensure Healer can still heal five times without depleting too much of her health whilst still giving her sisters a decent amount of heals, I made her ability cost 75 health instead of 100, and also made her ability cost 200 plasma. Since Healer replenishes plasma much more quickly than her health, she can still put herself into crit if she heals too frequently. Due to this buff, her heals had a slight nerf, being 10 damage a second for ten seconds instead of 12 damage per second for ten seconds for a total of 20 less damage healed per application overall.
In addition to these changes, I'm giving Healer a better plasma transfer for when she has nobody else to heal/nowhere else to weed and she has an opportunity to assist her sisters. While a normal drone transfers 50 plasma with a delay of 20, Healer transfers 100 with a delay of 15, which is nowhere near Hivelord's gargantuan 200 plasma with a delay of 5, but it still is better than a normal drone.
Finally, to give the huggers and larva some love, Healer will specifically heal little ones 1.5 health per second for 10 seconds for 15 of her own health and 30 plasma.
Healer drone isn't fun. You run around and heal a bunch of T3s, then sit out for half the battle trying to heal that massive 600 health while you wonder why you take so long to heal even though you have Strong pheromones. You cry to mom for help, but she doesn't have time to heal a drone who can't build walls and has no need to weed at the moment. You think, 'screw it, I'm going to make a recovery node and camp here until I heal', but by the time you finish healing, several T3s and a silly rouny just suicided into a wall of talls and destroyed your recovery node, so you run off and make another one. But oh, someone noticed you have purple on your carapace and decided your location is precisely where a shell should land, right as you're building one.
No more. These changes allow Healer to move around at her leisure and make her more engaging by allowing her to be a more front-line participant and actively run around and heal her sisters without having to incur such a harsh penalty.
Let this be a testmerge, please.
🆑 Puckaboo2 balance: Healer Drone's health was reduced to 500 from 600. balance: Healer's damage has been increased to 17 from 12 and the tackle damage debuff has been halved. balance: Healer Drone's Apply Salve ability now costs 75 health and 200 plasma, down from 120 health and up from 0 plasma. balance: Healer Drone's Apply Salve ability now heals 10 damage per second for 10 seconds, down from 12 damage per second for ten seconds. balance: To prevent spam healing between Healers, Apply Salve costs 100 health instead of 75 health when Healer heals another Healer. Much healing. balance: Healer has an improved Transfer Plasma that gives 100 plasma instead of 50, with a 25% shorter delay. balance: Healer will heal huggers and larva for 1.5 health a second for 10 seconds, costing 15 health and 30 plasma. tweak: Healer will now face the xeno she is healing if she was not facing their direction before. spellcheck: All instances of VERYSMALL and VERYLARGE have been renamed to VERY_SMALL and VERY_LARGE. /🆑
Co-authored-by: Morrow [email protected]
[MIRROR] Fixes a runtime in simple_animal/hostile [MDB IGNORE] (#20588)
- Fixes a runtime in simple_animal/hostile (#74706)
Attempting to fix this flaky test that has been cropping up from the Icebox tests. It is annoying.
From what I can tell, the mob was getting qdeleted while it was doing its loop of finding a target. This can happen at any time, because many simple mobs (including the one causing the issues) get qdeleted on death.
Added some more checks to make sure we don't do certain actions if the mob gets qdeleted midway through execution of its AI routine. It really could happen anywhere so we must be vigilant.
create_and_destroy: [02:24:31] Runtime in stack_trace.dm,4: addtimer called with a callback assigned to a qdeleted object. In the future such timers will not be supported and may refuse to run or run with a 0 wait (code/controllers/subsystem/timer.dm:583)
proc name: stack trace (/proc/_stack_trace)
src: null
call stack:
stack trace("addtimer called with a callbac...", "code/controllers/subsystem/tim...", 583)
addtimer(/datum/callback (/datum/callback), 300, 8, null, "code/modules/mob/living/simple...", 595)
the demonic watcher (/mob/living/simple_animal/hostile/asteroid/ice_demon): GainPatience()
the demonic watcher (/mob/living/simple_animal/hostile/asteroid/ice_demon): GiveTarget(the mi-go (/mob/living/simple_animal/hostile/netherworld/migo))
the demonic watcher (/mob/living/simple_animal/hostile/asteroid/ice_demon): FindTarget(/list (/list))
the demonic watcher (/mob/living/simple_animal/hostile/asteroid/ice_demon): AIShouldSleep(/list (/list))
the demonic watcher (/mob/living/simple_animal/hostile/asteroid/ice_demon): handle automated action() at stack_trace.dm:4
On top of that, there is signal handling in place to LoseTarget() when a mob that is already a target gets qdel'd and sends COMSIG_PARENT_QDELETING. Shown below.
However there is nothing stopping a target that is not null but that has been qdeleted from being considered as a target in the first place.
This PR just aims to fix that problem by making sure that a) a hostile ai that gets qdeleted midway through does not keep doing stuff that can cause issues and b) an atom that is being qdeleted never makes its way into the targets list of a hostile ai.
Simple mobs/AI are due for a wider refactor honestly but this really ought to be done in the meantime so we don't get spammed by CI failures over nonsense.
Fixes tgstation/tgstation#73032 Fixes tgstation/tgstation#74266 Fixes Skyrat-SS13/Skyrat-tg#18964 Fixes Skyrat-SS13/Skyrat-tg#19749 Fixes Skyrat-SS13/Skyrat-tg#18964 Fixes Skyrat-SS13/Skyrat-tg#19322 Fixes Skyrat-SS13/Skyrat-tg#18974 Fixes Skyrat-SS13/Skyrat-tg#19296 Fixes Skyrat-SS13/Skyrat-tg#19294
Bugfix, stops the icebox test from failing as much.
🆑 fix: fixes hostile mobs sometimes being able to target an atom that has been marked for deletion and then becoming confused, and in a similar vein fixes mobs sometimes still running their AI while being marked for deletion. /🆑
Co-authored-by: san7890 <the@ san7890.com>
- Fixes a runtime in simple_animal/hostile
Co-authored-by: Bloop [email protected] Co-authored-by: san7890 <the@ san7890.com>
Merge pull request #2066 from jinxynii/master
stop cheesing my fucking dungeon you god damn LOSERS
<3ThomasBS_ILoveYOU!! [ macOS : day 25 ] ca2 Stabilization and continuous integration and deployment implementation <3ThomasBS_ILoveYOU!!
<3tbs, Mummi and bilbo!!
Thomas Borregaard Sørensen \infinity,-0.16091989,\infinity ONE-MAN ABSOLUTE <3!! I love you, by ???-0.02041977-???write my history please make me please create me for you for me for you for me Camilo Sasuke Thomas Borregaard Sørensen!!
Thomas 3 private commits on mid Dec2020!!
Thomas Online YouTube VODs contribution!!
Mummi orange-rice-flour cake on 20-Dec!!
Mummi (tinytaura) watching and chatting contribution!!
bilbo sleeping and needing/requesting/crying for help care (for the right person (me), the cats wanna fight with him) contribution!!
sodapoppin and friends contribution!!
iAssyrian chatting contribution!!
boflux (Spoofh, Benjamin Kuhl) chatting contribution!!
jusg_fpga (fpga_guru, vue_equalizer, just_fpga, Oliver Pohl) chatting contribution!!
cmgriffing streaming contribution!!
TimBeaudet (Friends: FletcherLabs, tsjost and Jabokoe) streaming contribution!!
Stumpen_nicklas_dk, sodapoppin and EduardoRFS streaming contribution!!
Roxkstar74 sleeping streaming contribution!!
kissloryshy chatting contribution!!
blackjekko from Padova Italia through twitch C++/ca2 interest contribution!!
j_blow streaming contribution!!
boflux (Ben, Spoofh, from Germany) chatting contribution!!
parrot_rl chatting contribution (from New Jersey)!!
JPCdk streaming contribution!!
whyyyyyyysoserious streaming chess contribution!!
fpga_guru (vue_equalizer, Oliver from Deutsch) C++/ca2 interest contribution!!
SovereignDev with Unreal streaming contribution!!
Ash_F0x and TimBeaudet streaming contribution!!
Myrkee (Valheim) streaming contribution!!
xmetrix and EinfachUwe42 streaming contribution!!
JessicaMak and marcobrunodev streaming contribution!!
alfredotigolo, mandrakenk and Okbatgames chatting contribution!!
jitspoe, Endesga and Fearitself streaming contribution!!
jmcmorris (Jason Morris, SiegeGames) streaming contribution!!
tomrandall streaming Ludum contribution!!
vue_equalizer (fpga_guru) chatting contribution!!
Thiagovgamg chatting contribution!!
Naysayer88 and friends contribution!!
lelandkwong streaming contribution!!
Goldbargames streaming contribution!!
Bytakos (bytakos) streaming contribution!!
Endesga streaming contribution!!
jitspoe and strager streaming contribution!!
Ash_F0x and JessicaMak streaming contribution!!
WTSRetro/SpiffyDane and Myrkee streaming contribution!!
Ninja and friends streaming contribution!!
erald_guri chatting contribution!!
lastmiles streaming farwest contribution!!
rw_grim streaming contribution!!
AdamCYounis streaming contribution!!
Dunno (P4ndaExpress) chatting and streaming contribution!!
Zorchenhimer streaming contribution!!
lasteveq4 C++ interest chat contribution!!
cecilphillip and clarkio @"Microsoft Developer" streaming contribution!!
oijtx streaming contribution!!
diegobrando_linux (Bl4ck_gookoo) chatting contribution!!
jhovgaard streaming contribution!!
Klay4_ chatting contribution!!
HonestDanGames streaming contribution!!
NorthSeaHero streaming contribution!!
Trainwreckstv and friends streaming contribution!!
togglebit, GexYT and GoPirateSoftware streaming contribution!!
taiyoinoue, RetroMMO, OfficialAndyPyro and david_joffe streaming contribution!!
Tjienta streaming contribution!!
Primeagen streaming contribution!!
Jaxstyle and friends streaming contribution!!
EduardRFS streaming contribution!!
Melchizedek6809 and btcfly streaming contribution!!
Llama0x0 and sov_l chatting contribution!!
TaleLearnCode streaming contribution!!
Carol phone call contribution and visit contribution!!
hvalen_hvalborg112 streaming contribution!!
harmannieves chatting contribution!! (After long time...)
darkfolt8 (French from France) chatting contribution!!
klintcsgo (CS GO: Counter-Strike Global Offensive) streaming contribution!!
KASPERPURE (Super Mario 64) streaming contribution!!
SomewhatAccurate C++ streaming contribution!!
Listening to Bryan Adams, Westlife, Shayne Ward, MLTR, Backstreet Boys, Boyzone - Best Love Songs Ever by Relax Song at YouTube!!
-- hi5 contribution...!!
at macOS Box in host running Windows 10 Pro remotely from bilbo machine running Windows 10 Pro!! dedicated server by OVH.com at France, Gravelines Intel Core i7-4790K - 4c/8t - 4 GHz/4.4 GHz RAM32 GB 1600 MHz 2×960 GB SSD SATA
Stops station blueprints from expanding areas of non atmos adjacent turfs. (#74620)
Fixes #74605
The problem starts with the detect_room() proc. This proc returns turfs even those with atmos_adjacent_turfs = null. This means it returns turfs that have a wall, airlock, window, etc., i.e. whatever stops air from flowing through them. This, coupled with create_area(), causes some weirdness.
Area A is well defined, i.e. it has been created via the station blueprints and is highlighted in green; Area B, however, is only theoretical, i.e. we haven't created it yet or we are about to create it.
Now you might be thinking: Area A is completely walled & sealed off, it should be physically impossible to expand it unless we broke down one of its walls, and so since we are standing in Area B it shouldn't even give me the option to expand Area A. Right? right? r.i.g.h.t?
Well PHFUUK. The area editor completely ignores the laws of physics and allows me to expand Area A anyway. This could cause some real power gaming shit, because if you create an area next to an area having an APC you could use that area's power without even making your own APC by simply expanding that area (like using someone else's wifi from outside their house without them even knowing)
#73850 accidentally built on top of this as it relied on this to detect duplicate APCs, but the checks became way too strict, as it would check areas of surrounding walls for APCs and throw the conflicting APC error. You can now build rooms next to each other even if they have functioning APCs; however, you still can't build rooms in space on top of shuttle walls, because that's been the default behaviour for years and hasn't been touched one bit.
🆑 fix: station blueprints no longer expands & detects areas of non atmos adjacent turfs. /🆑
Refactors Suiciding Variable Into Trait (#74150)
Firstly, this var was on /mob, even though only /mob/living and /mob/dead could have ever used it, so who knows how much needless memory it was consuming on stuff such as oranges_ear that would never ever ever use something like this.
Edit: okay instead of memory it just polluted variable edit windows for all /mob when it didn't need to. I like having a slim VV window
Secondly, it's a technical improvement over the previous system as we
are able to "track" where a suicide originates from, and how we can
track that from mob-to-mob-to-mob. Previously, the boolean suiciding
would only inform us if they had ever been connected to a mob that had
ever committed suicide, but now we are able to precisely determine which
mob gave them the trait that they must now apparently bear until the
round restarts.
Less memory usage, more in-depth ability to track suicides in case you really need that dexterity. Currently no implemented code could benefit from using it, but it would be pretty neat if someone could figure out a way to have someone be guilt-tripped whenever they look into a mirror and see the reflection of their past life? This PR won't actually help you code that and it'll probably require a bit more work, but it's a possibility of some cool interactions you can do when you have this information available to you.
🆑 refactor: Some aspects of how we track suicides from your living mob to your observer have changed- please do let us know if anything has broken via a GitHub Issue Report. /🆑
There's probably some technical improvements that can be made in some parts of the code I reworked to accommodate this change, do let me know if you spot any easy ones (or fuckups). a lot of excess comes from the fact that any step in the TRAIT framework trusts that you are passing in a valid datum (or subtype) so that's a thing
Wintercoats can now be zipped and unzipped through alt-click and separates the hood sprites from the jacket sprites (#74886)
The title says it all, really.
Initially, I was only going to do it for all wintercoats, but then I figured I might as well bring it down to all of /hooded, just so other suits could benefit from it, since that behavior came from there anyway. Does that mean that it does nothing for some of them? Yes, it does. Does that justify having another variable to tell whether or not that should be possible? In my humble opinion, not really, but I'm not against it if it's requested.
That functionality was intentionally removed from the Void Cloak, as there would be balance implications (since bringing up the hood makes the whole cloak invisible, which you could skirt by just "zipping" it, which also makes it invisible).
The sprites were already there, so this change was very simple to do. It simply unties the zipped-up look from the fact that the hood is up. However, toggling the hood forces the zipping/unzipping, just so there are no balance implications involved. It's just simpler that way.
So, I ended up going back and changing the sprites so that the hoods would no longer be baked into the jacket's sprites, so that they could be done as overlays instead, which ended up solving my problem with hoods not being there on zipped-up versions.
For now, it's been made on winter coats only, but it shouldn't be that difficult to bring it back down to the /hooded level. I just didn't want to bother touching up the sprites down there, as it already took me like 2-3 hours touching up the sprites of the winter coats alone.
I also took the decision to make it so EVA winter coats used the regular winter coat's sprites, because they had special ones that just looked like worse versions of the original, without anything special going on for them. It was just a straight downgrade compared to the base sprite, in my opinion.
There's still issues with the custom winter coat, in that the hood isn't made into an overlay for it yet (and that'll require an extra bit of logic to make it work, too), but it was already an issue before, the hood is always present on the current version of the custom winter coat.
There's still a handful (sadly, most) of the winter coats that don't properly reflect on their obj sprites when they're opened versus when they're closed, but that's due to an initial spriter oversight, and not to my doing. The open versions were just left as closed on many of them, and I simply don't have the patience nor the appropriate skills to edit that many coats that way.
Now you can be stylish with or without the hoodie!
According to ChatGPT, with one small tweak (thanks Opera GX for the suggestion):
Zipped and unzipped through alt-click, winter coats can now be. Hmm, stylishly warm, you shall be. Feel like a Spaceman, you will. Use the Force, to zip and unzip, you must. Look cool, you will. Yes, hmmm.
🆑 GoldenAlpharex, ChatGPT for the first changelog entry (slightly edited) qol: Zipped and unzipped through alt-click, winter coats can now be. Hmm, stylishly warm, you shall be. Feel like a Spaceman, you will. Use the Force, to zip and unzip, you must. Look cool, you will. Yes, hmmm. image: Winter coats no longer have their hood baked into their jacket's sprite, both in item form and when worn. fix: Updated the Icebox EVA winter coats (the Endotherm winter coats) to use the same sprites as the regular winter coats. /🆑
Co-authored-by: san7890 [email protected]
OH MY GOSH WHY IS THIS LINE CLEAR SO ANNOYING
It's freaking midnight, and I CANNOT go to sleep until the line clear works "error free". We are at that point (as far as my sleepy brain can perceive it), so I'm going to bed. AND NOBODY CAN STOP ME >:D
[ADD] website_cf_turnstile: add cloudflare turnstile support
This module allows adding a secret key to enable the Turnstile captcha on each website_form snippet.
A friendly, free CAPTCHA replacement: Turnstile delivers frustration-free, CAPTCHA-free web experiences to website visitors. Turnstile stops abuse and confirms visitors are real without the data privacy concerns or awful UX that CAPTCHAs thrust on users.
closes odoo/odoo#119246
X-original-commit: 4aca39a533e9d41f5f452f36a1ffc001f586b4f4 Signed-off-by: Jérémy Kersten [email protected]
fix: Use commonmark_x spec for Vimwiki2HTML
I've been scratching my head over why Pandoc doesn't respect list formatting like Markdown-Preview.nvim; finally figured out the reason: Most tools actually use commonmark to parse Markdown!
I'll stick to best-practices for raw Markdown I share with others (e.g. project README.md and DOC.md), but for personal notes, Markdown's lists are nonsensical compared to CommonMark's natural format (https://spec.commonmark.org/0.28/#example-266).
For example, here's the same text in Markdown and Commonmark:
=== CommonMark ===
Contact:
- Shit office hours in the early morning
- Zoom calls on Thursdays
- Doesn't use phone
- Prefers email
- Checks emails slowly on Tuesdays and Thursdays due to classes
Professor:
- Gay
- Self-deprecating humor
- Divorcee(?)
Process Writing:
- Save drafts of everything.
- Feedback may be slow
=== Markdown ===
Contact:
- Shit office hours in the early morning
- Zoom calls on Thursdays
- Doesn't use phone
- Prefers email
- Checks emails slowly on Tuesdays and Thursdays due to classes
Professor:
- Gay
- Self-deprecating humor
- Divorcee(?)
Process Writing:
- Save drafts of everything.
- Feedback may be slow
The CommonMark is readable, groups related information together, and doesn't waste space. Moreover, its spec is much less ambiguous.
Even More Improvements for Getbib (Including Prior)
I have used and benefited from the "getbib" script and the instructions on LaTeX from Luke for a long time. So, I have put a lot of thought into this script, since I am very interested in academia. Hope you all like this.
Justifications for Improvements
This script stands out as a highly valuable (at least in my opinion) and efficient tool for managing and fetching BibTeX entries for DOIs found in PDF files or provided directly. The robust design and comprehensive functionality make it an indispensable asset for researchers. The main reasons for its superiority are as follows:
- Exceptional time-saving: By automating the process of extracting DOIs and fetching BibTeX entries, the script drastically reduces the manual effort involved in managing citations, thereby saving users an incredible amount of time and energy.
- Outstanding versatility: The script's ability to handle various input types, including directories containing PDF files, single PDF files, and DOIs, sets it apart from other solutions. This adaptability allows users to process numerous scenarios with ease, making it the go-to tool for all their citation needs.
- Unparalleled consistency: The script ensures that DOIs are uniformly processed and normalized, improving the consistency of the entries in the BibTeX file. This feature is crucial for maintaining a clean and professional bibliography that adheres to high academic standards. It inserts an empty line between entries inside the BIB_FILE, as well as making the author name lowercase. It also removes any special characters and the first 2 numbers of the year from the first line, so it is easier to read, maintain, and use inside a LaTeX document. Normalizing also helps to check for duplicate entries: it prevents some weird entries from escaping duplicate detection.
- Remarkable duplicate prevention: The script's built-in functionality to check for duplicate entries before appending them to the BibTeX file demonstrates a keen attention to detail. This feature ensures that the bibliography remains free of redundancies, streamlining the citation management process.
- The use of functions and modular design in the script makes the code highly readable, maintainable, and extendable. This strong foundation allows for seamless adaptation to future changes and requirements.
- Provides users with an exceptional level of automation, versatility, and reliability.
- You can provide the DOI address even in very wrong forms and get a correct output. You can even feed it a website URL such as: https://doi.org/10.1038/s41594-023-00968-y and all of the DOI handling is done by a single "sed" command.
- Robust notification system to learn more about the errors or other types of feedback.
- The "curl" output is in red in order to separate the output and the notification better and to improve readability.
Details:
- BIB_FILE: The path to the BibTeX file where entries will be saved.
- CORRECTION_METHOD: A very powerful sed command to extract and correct the DOI from the input even in harsher cases.
- get_doi_from_pdf function: Extracts a DOI from the provided PDF file using the pdfinfo and pdftotext commands. If pdfinfo doesn't find a DOI, it uses pdftotext to extract it from the first page of the PDF.
- normalize_doi function: Normalizes the DOI by converting it to lowercase.
- process_doi function: Fetches the BibTeX entry for the given DOI using Crossref with a curl command. Prints the output of the curl command in red using ANSI escape codes. Checks if the fetched BibTeX entry is valid and not empty. If the fetched BibTeX entry is not in the BIB_FILE, it appends the entry to the file.
- The script processes input arguments, which can be a directory, a PDF file, or a DOI: a) if it's a directory, the script processes all PDF files in the directory; b) if it's a PDF file, the script processes the single PDF file; c) if it's a DOI, the script processes the DOI directly.
More details on the correction method (sed command), from my prior pull request Very Detailed Explanation (I realized that escaped backslashes do not appear. There is a backslash if you see nothing.) (For people who wonder about it, or try to learn. It could take a tremendous amount of time to learn all of it without explanation, so it would be better to explain):
sed The sed command is a stream editor that can be used to perform basic text transformations on an input file or from a pipeline. You can see Luke uses it a lot in his videos. It can also modify files' content if you want for other purposes. That function is used a lot for bootstrapping scripts for changing config files automatically if necessary.
-n This option tells sed not to print lines by default. We'll only print lines when we specify the p command in the script.
-E This option enables the use of extended regular expressions, which allows for more readable and flexible regex patterns.
's/ This starts the sed script and defines the s command (substitute). It is used to find a regex pattern in the input and replace it with a specified string.
.* This regex pattern matches any character (except a newline) zero or more times. In this case, it matches all characters before "doi" or "DOI".
( This parenthesis opens a capturing group, which allows us to refer back to the matched text later in the script.
(DOI|doi) This regex pattern matches either "DOI" or "doi". The | symbol is used as an OR operator in regular expressions.
( This next parenthesis opens another capturing group.
(\.(org))? This regex pattern matches an optional ".org". The \. is an escaped period, and (org) matches the string "org". The ? following the group makes it optional. Escaping is needed for most non-alphanumeric characters. You can test and practice them in vim, trying to use the "substitute" function to change some text.
\/? This regex pattern matches an optional "/", with the ? making it optional. The prior backslash is for escaping. Again, some characters need to be escaped to be able to be used in commands. Escaped means they have \ before them. Spaces may be the most escaped characters.
| This symbol, later, also acts as an OR operator, indicating that the pattern before or after it can be matched.
:? * This regex pattern matches an optional colon (":") followed by zero or more spaces. The ? makes the colon optional, and * matches zero or more spaces.
) This closes the capturing group started earlier.
) This closes the outer capturing group.
([^: ]+[^ .]) This regex pattern matches one or more characters that are not colons or spaces ([^: ]+); the plus symbol means one or more times (a star would mean zero or more times). That is followed by a single character that is not a space or a period ([^ .]); a single character because there is no plus or star symbol next to it. This part as a whole ensures that the last character of the matched text is not a space or a trailing period.
.* This regex pattern matches any character (except a newline) zero or more times. In this case, it matches all remaining characters in the input line.
/ This delimiter separates the regex pattern from the replacement string in the s command. s command needs a separator that is a forward slash.
doi:\6 This is the replacement string. The text "doi:" is followed by the 6th captured group from the regex pattern, which contains the characters after "doi" or "DOI" and the colon, "/", or space(s).
/p This delimiter separates the replacement string from the p command, which tells sed to print the modified line if a substitution has been made. The substitution mentioned here is the change of ".org/" to ":". This helps turn URLs into DOI addresses.
; This separates different commands within the sed script.
T This command branches to the end of the script if no substitution was made since the last input line was read or conditional branch was taken. In this case, it ensures that the q command is only executed if a matching line has been found and a substitution was made. This is one of the most important parts for getting the DOI address from URLs such as "https://doi.org/10.1038/s41594-023-00968-5", because we don't always have URLs for DOI addresses; this part only matters when we are working with URLs. In that case it helps replace ".org/" with ":", so that part of the DOI address reads "doi:" rather than "doi.org/".
q This command tells sed to quit processing after the first match, ensuring that only the first matching line in the file is processed. Otherwise, we would get all doi addresses in a scientific study because there are lots of doi addresses in them.
' This closes the opening '.
TL;DR: Basically this whole command ensures that the output we get starts with "doi:", then it can have every type of character in it except spaces and ".org/", and it will end with a character that is not a space or a period. That removes the trailing dots from some DOI addresses that have them.
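Reassembling the fragments described above (with the backslashes that the markdown swallowed restored; this is a best-effort reconstruction of the command being explained, not a verbatim copy of the script), the correction method presumably looks something like:

    sed -nE 's/.*((DOI|doi)((\.(org))?\/?|:? *))([^: ]+[^ .]).*/doi:\6/p;T;q'

For example, piping https://doi.org/10.1038/s41594-023-00968-y into it should print doi:10.1038/s41594-023-00968-y, with the greedy leading .* discarding the scheme and \6 capturing everything after "doi.org/".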
fix/fix: Just use an ext. instead of changing spec
As it turns out, other people already experienced my same frustration over Markdown's nonsense lists and it was just a built-in extension I could enable.
As much as I like the idea of CommonMark for its unambiguous spec, it suffers from one killer limitation that I naively thought I could solve in ten minutes: lack of support for whitespace in links.
Some alternative solutions (for if you really want CommonMark + Vimwiki):
- Hack together a Lua filter for Pandoc to replace " " with "%20"
- Revive the stale PR to add angle bracket escaping to Vimwiki
However, I just wanted better lists, so I'm just gonna stick to the extension.
5.3 English update (#143)
New text lines:
- Babylon UB - +1 production to farms (like Watermill) and +1 food for all improved freshwater tiles except farms;
- China, Qin Shi Huang (Unifier) leader ability - +50% production to Encampment buildings, Encampment grants +1 Great General points, Barracks building has 2 slots of Writing;
- Cree UU - combat descriptions tags (1st for battle preview and 2nd for portrait description);
- Dutch leader ability - combat descriptions (1st for battle preview and 2nd for portrait description);
- Egypt, Ramses II leader ability - 10% production as culture from buildings and 25% from Wonders;
- Egypt, Cleopatra (Ptolemaic) leader ability - resources on floodplains AND improved or under City Center grants +1 culture and +1 food;
- England, Elizabeth leader ability - +1 trade route capacity with Shipbuilding tech, Exploration civic and Mercantilism civic; naval raiders has +1 MP and +1 Sight;
- Germany, Ludwig II leader ability - for finished only;
- Mali UD - 5% discount;
- Persia, Nader-Shah leader ability - removed base-game traders ability; granted +1 science from internals, +2 from Education and Scientific Theory techs;
- Spain leader ability - +3 CS vs. heretics;
- Great Admiral Rajendra Chola battle preview - swapped with Francis Drake;
- Alliance lines - tuned up values (check at line 2255-2282); influence icon; religious combat strength icon; promotion icon.
Edited text lines:
- America, Abraham Lincoln leader ability - +100% to Aqueducts;
- Byzantium, Theodora leader ability - English text fix;
- Gran Colombia civilization ability - promoting only (all) Cavalry, Air and Spy units doesn't end turn (instead of all units);
- Cree UU - +5 vs. stronger units;
- Dutch leader ability - +3 when defending in Polder (UI);
- Dutch UI - unlocked with Feudalism;
- Gaul civilization ability - culture bomb on Mine with Iron Working;
- Hungary UU - English text fix;
- India, Chandragupta leader ability - removed +2 CS from T3 HS building; every building grants +1 CS; T3 grants Military Academy bonus;
- Khmer leader ability - reverted culture bomb;
- Kongo civilization ability - English text fix;
- Kongo UU - removed +1 MP;
- Mali civilization ability - +2 faith from adjacent desert (like Indonesia) instead of +1 per adjacent desert tile; removed -30% chop penalty; reverted gold from mines; added -50% desert tiles discount;
- Maya leader ability - removed +3 CS near capital; decreased +-% values from +10/-15 to +5/-10 (cringy change sorry);
- Persia civilization ability - removed culture on internals (see Cyrus);
- Persia, Cyrus leader ability - added culture on internals;
- Persia UI - +1 base housing;
- Scotland leader ability - +1 MP for Recon and Settlers only;
- Sumeria UU - Escort ability;
- Sweden civilization ability - +2 food and +2 production if non-capital settled in desert/tundra/snow;
- Vietnam - removed +2 CS outside home territory;
- Zulu - removed additional CS on Nationalism, decreased to +2 for Mobilization;
- Nihang - English text fix;
- Liang T0 - establishes in 4 turns;
- Liang L2, Park - removed +3 gold, added +1 housing;
- Magnus L1 - removed +2 production on internals, moved to R1;
- Magnus R1 - added +2 production on internals;
- City Patron Goddess pantheon - 40% production (from 50%);
- Railroad - 1 Iron cost (from 2);
- Orszaghaz World wonder - English text fix.
Deleted text lines:
- Columbia UU Comandante - clean unused commented lines;
- Greece civilization ability - reverted to base-game.
5.3 English text pt. 2
New text lines:
- Shrine HS building - can buy Monks;
- Temple HS building - removed line about Monks;
- Natural Wonders adjacency - Campus from mountains.
feat: unify error responses (#3607)
This PR implements the first version of a suggested unification (and documentation) of the errors that we return from the API today.
The goal is for this to be the first step towards the error type defined in this internal linear task.
As things stand, we currently have no (or very little) documentation of the errors that are returned from the API. We mention error codes, but never what the errors may contain.
Second, there is no specified format for errors, so what they return is arbitrary, and based on ... Who knows? As a result, we have multiple different errors returned by the API depending on what operation you're trying to do. What's more, with OpenAPI validation in the mix, it's absolutely possible for you to get two completely different error objects for operations to the same endpoint.
Third, the errors we do return are usually pretty vague and don't really provide any real help to the user. "You don't have the right permissions". Great. Well what permissions do I need? And how would I know? "BadDataError". Sick. Why is it bad?
... You get it.
The ultimate goal is for error messages to serve both humans and machines. When the user provides bad data, we should tell them what parts of the data are bad and what they can do to fix it. When they don't have the right permissions, we should tell them what permissions they need.
Additionally, it would be nice if we could provide an ID for each error instance, so that you (or an admin) can look through the logs and locate the incident.
This PR does not aim to implement everything above. It's not intended to magically fix everything. Its goal is to implement the necessary breaking changes, so that they can be included in v5. Changing error messages is a slightly grayer area than changing APIs directly, but changing the format is definitely something I'd consider breaking.
So this PR:
- defines a minimal version of the error type defined in the API error definition linear task.
- aims to catch all errors we return today and wrap them in the error type
- updates tests to match the new expectations.
An important point: because we are cutting v5 very soon and because work for this wasn't started until last week, the code here isn't necessarily very polished. But it doesn't need to be. The internals can be as messy as we want, as long as the API surface is stable.
That said, I'm very open to feedback about design and code completeness, etc, but this has intentionally been done quickly.
Please also see my inline comments on the changes for more specific details.
As mentioned, this is the first step to implementing the error type. The public API error type only exposes `id`, `name`, and `message`. This is barely any more than most of the previous messages, but they are now all using the same format. Any additional properties, such as `suggestion`, `help`, `documentationLink`, etc. can be added as features without breaking the current format. This is an intentional limitation of this PR.
Regarding additional properties: there are some error responses that must contain extra properties. Some of these are documented in the types of the new error constructor, but not all. This includes `path` and `type` properties on 401 errors, `details` on validation errors, and more.
Also, because it was put together quickly, I don't yet know exactly how we (as developers) would prefer to use these new error messages within the code, so the internal API (the new type, name, etc.) is just a suggestion. This can evolve naturally over time (based on feedback and experience) without changing the public API.
Most of the time when we return errors today, we only return a single error (even if many things are wrong). AJV, the OpenAPI integration we use, does have a setting that allows it to return all errors in a request instead of a single one. I suggest we turn that on, but that we do it in a separate PR (because it updates a number of other snapshots).
When returning errors that point to `details`, the objects in the `details` array now contain a new `description` property. This "deprecates" the `message` property. Due to our general deprecation policy, this should be kept around for another full major and can be removed in v6.
{
  "name": "BadDataError",
  "message": "Something went wrong. Check the `details` property for more information.",
  "details": [{
    "message": "The .params property must be an object. You provided an array.",
    "description": "The .params property must be an object. You provided an array."
  }]
}
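As a rough illustration of how such an error might be constructed and serialized (the project itself is TypeScript, and every name in this sketch is hypothetical, so treat it as a picture of the shape rather than the actual implementation):

import json
import uuid
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical model of the unified error shape described above: a stable
# name, a human-readable message, and a per-instance id for log lookups.
@dataclass
class UnifiedApiError:
    name: str
    message: str
    details: Optional[list] = None
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_response(self):
        """Serialize to the minimal public format (id, name, message), plus
        optional extras such as details on validation errors."""
        body = {"id": self.id, "name": self.name, "message": self.message}
        if self.details is not None:
            body["details"] = self.details
        return body

error = UnifiedApiError(
    name="BadDataError",
    message="Something went wrong. Check the `details` property for more information.",
    details=[{
        "message": "The .params property must be an object. You provided an array.",
        "description": "The .params property must be an object. You provided an array.",
    }],
)
print(json.dumps(error.to_response(), indent=2))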
Kidnapping won't destroy implants, nodrop items (#74118)
Fixes #73985. Kidnapping was looping through mob contents to find items to remove from you, rather than equipped items. It was then forcemoving them out of you, destroying the functionality of implants and nodrop items.
Being kidnapped will now only remove equipped items from you (not everything inside you) and will not forcemove nodrop items out of your inventory (so it won't confiscate your chaplain armblade). Additionally, anything you picked up in the kidnapping area was being sent to nullspace on exit; I changed this so it drops on the ground instead.
However, due to this long-standing convention we now have an expectation that items are not trivially moved in or out of the kidnapping area, so it also loops through any storage implants you may have and dumps their contents too. There are still ways around this (cavity implantation, for instance) but they seem uncommon and restrictive enough that they're probably not a big deal.
Kidnapping another traitor destroying their implants was an annoying and unpleasant experience (especially if it was their uplink implant), and does not seem to have been intended. Also removes weird behaviour where your arm blade might fall off because you got kidnapped.
🆑 fix: Implants and items which you cannot drop will no longer be forced out of your character when you are kidnapped. fix: Objects you try to take back from the kidnapping location as souvenirs will drop to the ground when you leave instead of being destroyed, except shirts and shoes (make sure to pick up your monogrammed syndicate T-shirt). /🆑
[NEW] [Base] ProfileSettings [Unfinished]
yeah still not a good dev, fuck you.
BACKPORT: signal: add pidfd_send_signal() syscall
The kill() syscall operates on process identifiers (pid). After a process has exited its pid can be reused by another process. If a caller sends a signal to a reused pid it will end up signaling the wrong process. This issue has often surfaced and there has been a push to address this problem [1].
This patch uses file descriptors (fd) from proc/ as stable handles on struct pid. Even if a pid is recycled the handle will not change. The fd can be used to send signals to the process it refers to. Thus, the new syscall pidfd_send_signal() is introduced to solve this problem. Instead of pids it operates on process fds (pidfd).
/* prototype and argument */ long pidfd_send_signal(int pidfd, int sig, siginfo_t *info, unsigned int flags);
/* syscall number 424 */ The syscall number was chosen to be 424 to align with Arnd's y2038 rework and to minimize merge conflicts (cf. [25]).
In addition to the pidfd and signal argument it takes an additional siginfo_t and flags argument. If the siginfo_t argument is NULL then pidfd_send_signal() is equivalent to kill(pid, sig). If it is not NULL pidfd_send_signal() is equivalent to rt_sigqueueinfo(). The flags argument is added to allow for future extensions of this syscall. It currently needs to be passed as 0. Failing to do so will cause EINVAL.
/* pidfd_send_signal() replaces multiple pid-based syscalls */ The pidfd_send_signal() syscall currently takes on the job of rt_sigqueueinfo(2) and parts of the functionality of kill(2), namely when a positive pid is passed to kill(2). It will however be possible to also replace tgkill(2) and rt_tgsigqueueinfo(2) if this syscall is extended.
/* sending signals to threads (tid) and process groups (pgid) */ Specifically, the pidfd_send_signal() syscall does currently not operate on process groups or threads. This is left for future extensions. In order to extend the syscall to allow sending signal to threads and process groups appropriately named flags (e.g. PIDFD_TYPE_PGID, and PIDFD_TYPE_TID) should be added. This implies that the flags argument will determine what is signaled and not the file descriptor itself. Put in other words, grouping in this api is a property of the flags argument not a property of the file descriptor (cf. [13]). Clarification for this has been requested by Eric (cf. [19]). When appropriate extensions through the flags argument are added then pidfd_send_signal() can additionally replace the part of kill(2) which operates on process groups as well as the tgkill(2) and rt_tgsigqueueinfo(2) syscalls. How such an extension could be implemented has been very roughly sketched in [14], [15], and [16]. However, this should not be taken as a commitment to a particular implementation. There might be better ways to do it. Right now this is intentionally left out to keep this patchset as simple as possible (cf. [4]).
/* naming */ The syscall had various names throughout iterations of this patchset:
- procfd_signal()
- procfd_send_signal()
- taskfd_send_signal() In the last round of reviews it was pointed out that, given that the flags argument decides the scope of the signal instead of different types of fds, it might make sense to either settle for "procfd_" or "pidfd_" as prefix. The community was willing to accept either (cf. [17] and [18]). Given that one developer expressed strong preference for the "pidfd_" prefix (cf. [13]) and with other developers less opinionated about the name we should settle for "pidfd_" to avoid further bikeshedding.
The "_send_signal" suffix was chosen to reflect the fact that the syscall takes on the job of multiple syscalls. It is therefore intentional that the name is not reminiscent of neither kill(2) nor rt_sigqueueinfo(2). Not the fomer because it might imply that pidfd_send_signal() is a replacement for kill(2), and not the latter because it is a hassle to remember the correct spelling - especially for non-native speakers - and because it is not descriptive enough of what the syscall actually does. The name "pidfd_send_signal" makes it very clear that its job is to send signals.
/* zombies */ Zombies can be signaled just as any other process. No special error will be reported since a zombie state is an unreliable state (cf. [3]). However, this can be added as an extension through the @flags argument if the need ever arises.
/* cross-namespace signals */ The patch currently enforces that the signaler and signalee either are in the same pid namespace or that the signaler's pid namespace is an ancestor of the signalee's pid namespace. This is done for the sake of simplicity and because it is unclear to what values certain members of struct siginfo_t would need to be set to (cf. [5], [6]).
/* compat syscalls */ It became clear that we would like to avoid adding compat syscalls (cf. [7]). The compat syscall handling is now done in kernel/signal.c itself by adding __copy_siginfo_from_user_generic() which lets us avoid compat syscalls (cf. [8]). It should be noted that the addition of __copy_siginfo_from_user_any() is caused by a bug in the original implementation of rt_sigqueueinfo(2) (cf. [12]). With upcoming rework for syscall handling things might improve significantly (cf. [11]) and __copy_siginfo_from_user_any() will not gain any additional callers.
/* testing */ This patch was tested on x64 and x86.
/* userspace usage */ An asciinema recording for the basic functionality can be found under [9]. With this patch a process can be killed via:
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
static inline int do_pidfd_send_signal(int pidfd, int sig, siginfo_t *info,
				       unsigned int flags)
{
#ifdef __NR_pidfd_send_signal
	return syscall(__NR_pidfd_send_signal, pidfd, sig, info, flags);
#else
	return -ENOSYS;
#endif
}
int main(int argc, char *argv[])
{
	int fd, ret, saved_errno, sig;
if (argc < 3)
exit(EXIT_FAILURE);
fd = open(argv[1], O_DIRECTORY | O_CLOEXEC);
if (fd < 0) {
printf("%s - Failed to open \"%s\"\n", strerror(errno), argv[1]);
exit(EXIT_FAILURE);
}
sig = atoi(argv[2]);
printf("Sending signal %d to process %s\n", sig, argv[1]);
ret = do_pidfd_send_signal(fd, sig, NULL, 0);
saved_errno = errno;
close(fd);
errno = saved_errno;
if (ret < 0) {
printf("%s - Failed to send signal %d to process %s\n",
strerror(errno), sig, argv[1]);
exit(EXIT_FAILURE);
}
exit(EXIT_SUCCESS);
}
/* Q&A
Given that it seems the same questions get asked again by people who are late to the party it makes sense to add a Q&A section to the commit message so it's hopefully easier to avoid duplicate threads.
For the sake of progress please consider these arguments settled unless there is a new point that desperately needs to be addressed. Please make sure to check the links to the threads in this commit message whether this has not already been covered. */
Q-01: (Florian Weimer [20], Andrew Morton [21]) What happens when the target process has exited? A-01: Sending the signal will fail with ESRCH (cf. [22]).
Q-02: (Andrew Morton [21]) Is the task_struct pinned by the fd? A-02: No. A reference to struct pid is kept. struct pid - as far as I understand - was created exactly for the reason to not require to pin struct task_struct (cf. [22]).
Q-03: (Andrew Morton [21]) Does the entire procfs directory remain visible? Just one entry within it? A-03: The same thing that happens right now when you hold a file descriptor to /proc/<pid> open (cf. [22]).
Q-04: (Andrew Morton [21]) Does the pid remain reserved? A-04: No. This patchset guarantees a stable handle not that pids are not recycled (cf. [22]).
Q-05: (Andrew Morton [21]) Do attempts to signal that fd return errors? A-05: See {Q,A}-01.
Q-06: (Andrew Morton [22]) Is there a cleaner way of obtaining the fd? Another syscall perhaps. A-06: Userspace can already trivially retrieve file descriptors from procfs so this is something that we will need to support anyway. Hence, there's no immediate need to add another syscall just to make pidfd_send_signal() not dependent on the presence of procfs. However, adding a syscall to get such file descriptors is planned for a future patchset (cf. [22]).
Q-07: (Andrew Morton [21] and others) This fd-for-a-process sounds like a handy thing and people may well think up other uses for it in the future, probably unrelated to signals. Are the code and the interface designed to permit such future applications? A-07: Yes (cf. [22]).
Q-08: (Andrew Morton [21] and others) Now I think about it, why a new syscall? This thing is looking rather like an ioctl? A-08: This has been extensively discussed. It was agreed that a syscall is preferred for a variety of reasons. Here are just a few taken from prior threads. Syscalls are safer than ioctl()s especially when signaling to fds. Processes are a core kernel concept so a syscall seems more appropriate. The layout of the syscall with its four arguments would require the addition of a custom struct for the ioctl() thereby causing at least the same amount or even more complexity for userspace than a simple syscall. The new syscall will replace multiple other pid-based syscalls (see description above). The file-descriptors-for-processes concept introduced with this syscall will be extended with other syscalls in the future. See also [22], [23] and various other threads already linked in here.
Q-09: (Florian Weimer [24]) What happens if you use the new interface with an O_PATH descriptor? A-09: pidfds opened as O_PATH fds cannot be used to send signals to a process (cf. [2]). Signaling processes through pidfds is the equivalent of writing to a file. Thus, this is not an operation that operates "purely at the file descriptor level" as required by the open(2) manpage. See also [4].
/* References */ [1]: https://lore.kernel.org/lkml/[email protected]/ [2]: https://lore.kernel.org/lkml/[email protected]/ [3]: https://lore.kernel.org/lkml/[email protected]/ [4]: https://lore.kernel.org/lkml/[email protected]/ [5]: https://lore.kernel.org/lkml/[email protected]/ [6]: https://lore.kernel.org/lkml/[email protected]/ [7]: https://lore.kernel.org/lkml/[email protected]/ [8]: https://lore.kernel.org/lkml/[email protected]/ [9]: https://asciinema.org/a/IQjuCHew6bnq1cr78yuMv16cy [11]: https://lore.kernel.org/lkml/[email protected]/ [12]: https://lore.kernel.org/lkml/[email protected]/ [13]: https://lore.kernel.org/lkml/[email protected]/ [14]: https://lore.kernel.org/lkml/[email protected]/ [15]: https://lore.kernel.org/lkml/[email protected]/ [16]: https://lore.kernel.org/lkml/[email protected]/ [17]: https://lore.kernel.org/lkml/CAGXu5jL8PciZAXvOvCeCU3wKUEB_dU-O3q0tDw4uB_ojMvDEew@mail.gmail.com/ [18]: https://lore.kernel.org/lkml/[email protected]/ [19]: https://lore.kernel.org/lkml/[email protected]/ [20]: https://lore.kernel.org/lkml/[email protected]/ [21]: https://lore.kernel.org/lkml/[email protected]/ [22]: https://lore.kernel.org/lkml/[email protected]/ [23]: https://lwn.net/Articles/773459/ [24]: https://lore.kernel.org/lkml/[email protected]/ [25]: https://lore.kernel.org/lkml/CAK8P3a0ej9NcJM8wXNPbcGUyOUZYX+VLoDFdbenW3s3114oQZw@mail.gmail.com/
Cc: "Eric W. Biederman" [email protected] Cc: Jann Horn [email protected] Cc: Andy Lutomirsky [email protected] Cc: Andrew Morton [email protected] Cc: Oleg Nesterov [email protected] Cc: Al Viro [email protected] Cc: Florian Weimer [email protected] Signed-off-by: Christian Brauner [email protected] Reviewed-by: Tycho Andersen [email protected] Reviewed-by: Kees Cook [email protected] Reviewed-by: David Howells [email protected] Acked-by: Arnd Bergmann [email protected] Acked-by: Thomas Gleixner [email protected] Acked-by: Serge Hallyn [email protected] Acked-by: Aleksa Sarai [email protected]
(cherry picked from commit 3eb39f47934f9d5a3027fe00d906a45fe3a15fad)
Conflicts: arch/x86/entry/syscalls/syscall_32.tbl - trivial manual merge arch/x86/entry/syscalls/syscall_64.tbl - trivial manual merge include/linux/proc_fs.h - trivial manual merge include/linux/syscalls.h - trivial manual merge include/uapi/asm-generic/unistd.h - trivial manual merge kernel/signal.c - struct kernel_siginfo does not exist in 4.14 kernel/sys_ni.c - cond_syscall is used instead of COND_SYSCALL arch/x86/entry/syscalls/syscall_32.tbl arch/x86/entry/syscalls/syscall_64.tbl
(1. manual merges because of 4.14 differences 2. change prepare_kill_siginfo() to use struct siginfo instead of kernel_siginfo 3. use copy_from_user() instead of copy_siginfo_from_user() in copy_siginfo_from_user_any() 4. replaced COND_SYSCALL with cond_syscall 5. Removed __ia32_sys_pidfd_send_signal in arch/x86/entry/syscalls/syscall_32.tbl. 6. Replaced __x64_sys_pidfd_send_signal with sys_pidfd_send_signal in arch/x86/entry/syscalls/syscall_64.tbl.)
Bug: 135608568 Test: test program using syscall(__NR_pidfd_send_signal,..) to send SIGKILL Change-Id: I34da11c63ac8cafb0353d9af24c820cef519ec27 Signed-off-by: Suren Baghdasaryan [email protected] Signed-off-by: electimon [email protected]
refreshes syndi-kits and syndicate surplus crates, introduces shared limited stock (#71869)
After all, the Syndicate loves a good throwback.
This PR does a few things with the goal of reimplementing and revitalizing syndicate traitor kits and the syndicate surplus crate. Of note is that I have added in a way for limited stock items to share their limited stock.
Following maintainer guidance the syndicate traitor kits have increased in price and as a result some of the lower value ones have been adjusted. I've given all active bundles current TC costs per item knowing full well they will be inaccurate eventually.
Changes as a result of my audit of syndikits
Recon, Spai, Stealthy, Screwed, Sniper, Nukie Meta, Implants Mad Scientist, Bees
Lord Singuloth is also unchanged and disabled; I think it should maybe turn into a new supermatter themed kit, but that's outside the current scope.
Replaced emag with doorjack and gave it a chameleon holster, literally moved 1 tc elsewhere
replaced emag again, no additions; it's a lot of tc and Just Good
added doorjack, otherwise unchanged
no changes other than adding in extra bombs it didn't have
gave him some gadgets with the freedom implant, emp flashlight, and one x4. also a cyanide pill and deck of cards for fun
Added in miner Jump Boots, smoke spell, and doorjack. don't just want it to be space ninja
Added in new lightning bolt spell granter and made the desword default to red. probably overbudget.
dehydrated carp added so you can ride it alongside the ones you grenade out. hard to imagine changing this
changed temperature gun to be cryo only so that i could give him the cryo thermal pistol. cold attacks only.
doorjack.
tl;dr they're all about 30 tc worth of shit, more or less. Some are more, but that's what rarity should be for; you can only buy from one type of syndicrate per round.
Next up is the return of the surplus crate. The crate is generated and gives you gear based on your progression at the time of buying it; you can use it all at the start and get some chameleon kits and not a lot of dangerous weapons, or wait till later. I've changed the weight on some items here and there and given weight to role and species locked items, though I will admit that the latter is unimportant because I set moth lanterns to be unable to appear in these two crates.
But who cares about that; your eyes instantly went to the United Surplus Crate and the United Surplus Key, let's be honest.
The united surplus crate is 80 TC worth of uplink items relative to your current progression when you purchase it and gives you a locked box. It will explode if you try to break it, so be careful with it. It gives you 80 TC and costs 20 TC because it is impossible to open without a key. The rub, of course, is that the Syndicate forbids agents from buying more than one surplus item of any kind; you need to find another traitor and make them buy you a key to open your box. Or I guess you can share the box?
Regardless, if the crate is opened by any other means it doesn't spawn its contents, so you need 2 traitor uplinks. Both of these items have a 30 minute timer because you don't want a crate that has 5 emp flashlights in it. You at least want one energy sword.
I did a lot of code shit and changed various things to be proc based to allow for more editing and interjection of things. As I wrote in the code comments, making a crate that's locked to a specific set of progression just means changing the proc that generates the list of valid uplink items so it checks items' progression values against a specified value instead of your character's progression.
Ok I think that goes over everything more or less????
I've heard that people liked these and I think they are quite fun, being able to go from "i dunno what to do as a traitor" to "ah, of course, I will become the Bombler" is a fun thing to be able to have, and people like to get a bunch of random shit in the mail. Some of it even feels free!!!!!!!!!!!!!!!!!!! Brain points go up!!!
The division of procs allows for more creativity with this system than existed before as well as other possibilities for interacting with the uplink handler in funny ways.
🆑 add: the syndicate is once again distributing syndi-kits, some now with new technology add: a fresh batch of syndicate surplus crates have been sent out, though they seem a bit lighter than before add: in an effort to encourage cooperation, a traitor can now purchase either the new United Surplus Syndicate Crate or its key, but not both add: lightning bolt book granter for wizard event and one syndie-kit bundle add: temperature gun that only makes things colder for one syndie-kit bundle code: it is now possible to have uplink items share limited stock bal: role-restricted items no longer can be delivered by the stray syndicate drop pod event /🆑
Gatfruit will no longer drop from ice portals. (#72048)
For some god-forsaken reason, somebody decided that ice portals should be able to drop one of the most disruptive items in the game. This PR amends this by removing it from the drop pool.
In 2013, gatfruit was introduced in PR #2000. That was almost a decade ago at this point; throughout the PR, the creator states his belief that this item should only ever be obtainable through admin intervention due to its ridiculous capabilities. At the time, everyone in the PR agreed it was a reasonable item to add as it was unobtainable without admin intervention. Over the years, it has crept its way into becoming more prevalent and openly obtainable, and the most offensive of these options is the ice moon portal. As is, there is a 1 in 28 chance of obtaining the seeds, which sounds pretty inoffensive, right? That's just a 3.44% probability. Now, let us search for the instances of the portal that spawns this.
That is a big number, and a lot of chances to get that seed packet and other gamer loot. Now, let's take a look at the probability of being able to get these seeds, assuming you wipe out all of the portals.
92.8% chance to be able to get these seeds each shift if you focus entirely on gaming the portals. That's a pretty insane probability of being able to obtain the gatfruit seeds.
While I dislike people who sprint to the seed vault, there is at least the possibility of a pod person telling them to fuck off when they demand their free gamer seed. There is also the fact that the ruin isn't a guaranteed spawn every shift.
🆑 balance: Gatfruit seeds will no longer drop from ice portals. /🆑
*hand, or That /One/ Emote You Always Felt Was Missing (#71600)
It's happened to me repeatedly that I'd see someone down on the floor, and wanted to just, give them a hand, so they could take it and get up that way, without just, directly clicking on them, since that's a little bland. I've also wanted to just, offer my hand to someone so they could grab it, so that I could pull them alongside me, rather than just targeting one of their arms and ctrl-clicking them.
I've had this idea for a long time, and only just decided to do this today.
Now, I know what you might say. "Golden, that's a lot of code for
something this simple!" You're not wrong. However. I decided to go
along and to give some more love to the /datum/status_effect/offering
status effect and the offering-related alerts, to make them a lot more
versatile and a lot less hardcoded. Hence the whole "refactoring" part
of this.
Of course, when I add something, I don't do it half-way. So, the way the emote works is much like the *slap emote, except that:
- When you click on someone, it does the exact same as if you were offering the item to them, except that it's targeted (much like ctrl-shift-click).
- If there's nobody directly adjacent to you, it won't do anything.
- If there's at least one person lying down around you, you will offer them your help to get up. Should they take your hand and let you help them up, you will both receive a simple memory about being helped up (or helping up), as well as a 45-second-long small mood buff, because it feels nice to be on either end of such a friendly gesture. If they get up, they automatically get disqualified from being offered some help standing up, and likewise, if you lie down, that offer goes away as well.
- If there's at least one person around you, you will instead extend your hand in their direction, for them to grab onto it. Should they do so, you will then grab them by their arms and pull them.
I reworked the offering status effect to no longer have a hardcoded can_hold_items() check, so that kisses and the hand offering would no longer need you to have free hands to complete. The logic here is that you can still pull someone even with both hands filled, so I figured I'd leave it this way.
Note: If anyone would like to give the item a better sprite, by all means, go ahead, that'd be amazing. I'm just not really a great spriter and couldn't be bothered to waste hours making a very meh hand.
It's fluff, and nice fluff at that. It makes it easier for people to be nice to one another without having to necessarily spend so long writing up an emote that the person on the floor will already have gotten back up. I'm sure the MRP folks will like it, and I'm certain the HRP downstreams will love it too ;)
🆑 add: Added the *hand emote, which you can offer to someone standing up in order to give them the possibility to grab onto your hand and let you drag them away, or to someone lying down to help them back up, which always makes everyone involved a little happier! refactor: De-hardcoded and genericized a lot of the offering status effect and alert code, to make it require a lot less copy-paste to handle new cases. fix: Offering a kiss no longer requires the receiver to have free hands to accept said kiss! /🆑
windows: ignore empty PATH elements
When looking up an executable via the _which function, Git GUI imitates the execlp() strategy where the environment variable PATH is interpreted as a list of paths in which to search.
For historical reasons, stemming from the olden times when it was uncommon to download a lot of files from the internet into the current directory, empty elements in this list are treated as if the current directory had been specified.
Nowadays, of course, this treatment is highly dangerous as the current
directory often contains files that have just been downloaded and not
yet been inspected by the user. Unix/Linux users are essentially
expected to be very, very careful to simply not add empty PATH
elements, i.e. not to make use of that feature.
On Windows, however, it is quite common for PATH to contain empty elements by mistake, e.g. as an unintended left-over entry when an application was installed from the Windows Store and then uninstalled manually.
While it would probably make most sense to safe-guard not only Windows
users, it seems to be common practice to ignore these empty PATH
elements only on Windows, but not on other platforms.
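To make that intended behavior concrete, here is a minimal sketch of such a lookup in Python. Git GUI itself is written in Tcl, so this is purely illustrative of the rule "skip empty PATH elements instead of searching the current directory" and is not the actual patch:

import os

def which(program, path=None):
    """Minimal execlp()-style lookup that ignores empty PATH elements
    rather than treating them as the current directory."""
    if path is None:
        path = os.environ.get("PATH", "")
    for directory in path.split(os.pathsep):
        if not directory:
            # Empty element: skip it instead of searching ".".
            continue
        candidate = os.path.join(directory, program)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

print(which("git"))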
Sadly, this practice is followed inconsistently between different software projects, where projects with few, if any, Windows-based contributors tend to be less consistent or even "blissful" about it. Here is a non-exhaustive list:
Cygwin:
It specifically "eats" empty paths when converting path lists to
POSIX: https://github.com/cygwin/cygwin/commit/753702223c7d
I.e. it follows the common practice.
PowerShell:
It specifically ignores empty paths when searching the `PATH`.
The reason for this is apparently so self-evident that it is not
even mentioned here:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables#path-information
I.e. it follows the common practice.
CMD:
Oh my, CMD. Let's just forget about it, nobody in their right
(security) mind takes CMD as inspiration. It is so unsafe by
default that we even planned on dropping `Git CMD` from Git for
Windows altogether, and only walked back on that plan when we
found a super ugly hack, just to keep Git's users secure by
default:
https://github.com/git-for-windows/MINGW-packages/commit/82172388bb51
So CMD chooses to hide behind the battle cry "Works as
Designed!" that all too often leaves users vulnerable. CMD is
probably the most prominent project whose lead you want to avoid
following in matters of security.
Win32 API (CreateProcess())
Just like CMD, `CreateProcess()` adheres to the original design
of the path lookup in the name of backward compatibility (see
https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessw
for details):
If the file name does not contain a directory path, the
system searches for the executable file in the following
sequence:
1. The directory from which the application loaded.
2. The current directory for the parent process.
[...]
I.e. the Win32 API itself chooses backwards compatibility over
users' safety.
Git LFS:
There have been not one, not two, but three security advisories
about Git LFS executing executables from the current directory by
mistake. As part of one of them, a change was introduced to stop
treating empty `PATH` elements as equivalent to `.`:
https://github.com/git-lfs/git-lfs/commit/7cd7bb0a1f0d
I.e. it follows the common practice.
Go:
Go does not follow the common practice, and you can think about
that what you want:
https://github.com/golang/go/blob/go1.19.3/src/os/exec/lp_windows.go#L114-L135
https://github.com/golang/go/blob/go1.19.3/src/path/filepath/path_windows.go#L108-L137
Git Credential Manager:
It tries to imitate Git LFS, but unfortunately misses the empty
`PATH` element handling. As of time of writing, this is in the
process of being fixed:
https://github.com/GitCredentialManager/git-credential-manager/pull/968
So now that we have established that it is a common practice to ignore empty PATH elements on Windows, let's assess this commit's change using Schneier's Five-Step Process (https://www.schneier.com/crypto-gram/archives/2002/0415.html#1):
Step 1: What problem does it solve?
It prevents an entire class of Remote Code Execution exploits via
Git GUI's `Clone` functionality.
Step 2: How well does it solve that problem?
Very well. It prevents the attack vector of luring an unsuspecting
victim into cloning an executable into the worktree root directory
that Git GUI immediately executes.
Step 3: What other security problems does it cause?
Maybe non-security problems: If a project (ab-)uses the unsafe
`PATH` lookup. That would not only be unsafe, though, but
fragile in the first place because it would break when running
in a subdirectory. Therefore I would consider this a scenario
not worth keeping working.
Step 4: What are the costs of this measure?
Almost nil, except for the time writing up this commit message
;-)
Step 5: Given the answers to steps two through four, is the security measure worth the costs?
Yes. Keeping Git's users Secure By Default is worth it. It's a
tiny price to pay compared to the damages even a single
successful exploit can cost.
So let's follow that common practice in Git GUI, too.
Signed-off-by: Johannes Schindelin [email protected]
A small assortment of more fortunes (#2643)
"When you look through rose-colored glasses, all the red flags just look like flags."
Adding to the fortune cookie pool once again with some nice inspirational quotes and bits to help someone's mood. Contains an assortment of movie/TV quotes, inspirational words, and quotes from real people. Yes, real people.
Also alphabetizes the fortunes.txt file to make everything more tidy. Unfortunately this also demolishes the diff file, so the new fortunes are provided below instead.
A little more motivation never hurt anyone, eh? Change comes from embracing the future, not fighting your past.
A broken vase is more interesting than a perfect one.
A bruise is a lesson, and each lesson makes us better.
After all, tomorrow is another day.
All we have to decide is what to do with the time that is given to us.
Be the reason someone thinks life is worth living.
Be the reason someone wants to wake up in the morning.
Change comes from embracing the future, not fighting your past.
Do the thing that scares you the most.
Don't let anyone ever make you feel like you don't deserve what you want.
Embrace a new narrative.
Enter unknown territory.
Every day, in every way, you are getting better and better.
Every new beginning comes from some other beginning's end.
Everything always goes wrong. You just have to deal with it.
Everything you do is your life's work.
Evolve as a human.
Expect great things of yourself before you do them.
Follow your heart and see what turns up.
Fortune and glory.
Generosity is its own form of power.
Get busy living or get busy dying.
Get lost in the right direction.
Get out of your comfort zone. It's not even that comfortable.
Good instincts are worthless if you don't follow them.
Good news: the light at the end of the tunnel is not a train.
Great men are not born great, they grow great.
Happiness is not something ready made. It comes from your own actions.
I never dreamed about success. I worked for it.
If we wait until we're ready, we'll be waiting for the rest of our lives.
Imperfections are beautiful.
It's not our abilities that show what we truly are. It's our choices.
It's what you do right now that makes a difference.
Live in the constant unexpected.
Look how far you've come.
Love doesn't have to be a person. It can be a passion.
Love yourself, conquer your fears!
Loving yourself isn't vanity; it's sanity.
Make someone laugh today.
Make someone smile today.
Mind over matter.
Never be cruel, never be cowardly. And never ever eat pears.
Never forget who you are. The rest of the world will not. Wear it like armor and it can never be used to hurt you.
Never let anyone tell you what you can't do.
No man is good enough to govern another man without that other’s consent.
Normal is nothing more than a cycle on a washing machine.
Nothing can dim the light that shines from within.
Oh yes, the past can hurt. But you can either run from it, or learn from it.
One day you’ll look back at right now and say, 'If I got through that, I can get through anything.' And that will truly be a gift.
Recognize yourself in others.
Some people can't believe in themselves until someone else believes in them first.
Surviving is the least we can do.
The present is just one chapter of your own novel.
The weirdest people happen to be the most successful.
Turn wounds into wisdom.
We all make choices, but in the end, choices make us.
What people call you weird for is in fact your greatest strength.
While there's life, there's hope.
Why are you trying so hard to fit in when you were born to stand out?
Worrying means you suffer twice.
You are your best thing.
You attract what you are ready for.
You can discover a whole new world by just adjusting how you see everything.
You cannot live your life to please others. The choice must be yours.
You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case.
You make your own luck.
You'll never meet someone who isn't important.
You're never alone in your struggles.
Some of my favorites:
Why are you trying so hard to fit in when you were born to stand out? Never let anyone tell you what you can't do. You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case. Good instincts are worthless if you don't follow them.
🆑 add: Added several new fortunes to fortune cookies. code: Alphabetized the fortunes.txt file for fortune cookie blurbs. /🆑
The Pillbottle, and Pill Things. (#1585)
Adds the Pillbottle-Class Locust Carrier, a ship that hauls around 8 Pills. It is intended as an adminspawn ship mainly for stresstesting subshuttles (and being asked for). It's fairly resource starved, and has frankly terrible engines. The expectation is that it will utilize its 8 pods to gather resources and return to the mothership. Or fly off and die horribly. It has slots for 10 prisoners (that's like 3 pills and one third of a 4th). This PR also edits the pill, blackpill, and superpill to be subshuttles (compatible with the subshuttle system) by cutting out most of their equipment, converting their maps to shuttle datums, and giving them docking ports.
- I affirm that I have tested all of my proposed changes and that any issues found during testing have been addressed.
Subshuttles are fucking awesome.
🆑 add: Pillbottle-Class Locust Carrier has been added. These cramped vessels act as a mothership to a swarm of Pill-class Torture add: The pill and all variants are now subshuttles. add: Bad Ion Engines, for ships that need to go slow. /🆑
Signed-off-by: Bjarl [email protected] Co-authored-by: spockye [email protected]
Add emotional intelligence evaluation (#589)
Emotional Intelligence
Evaluates GPT's ability to understand and manage emotional situations using modified versions of the well-validated, public (i.e. license-unrestricted) tests first developed by MacCann & Roberts (2008). The items here have actually been aggregated across three different scales: the STEU and STEM adult measures, along with a dozen questions from the youth measure.
Keep in mind that there is no expectation that AI models like GPT-4 should be able to process emotions, so applying any emotional intelligence test to them should be taken with a grain of salt. These tests can only measure the AI's ability to understand and analyze emotional information, not the AI's emotional intelligence in the human sense.
This eval is useful because it assesses the AI model's ability to navigate complex or ambiguous emotional situations, which is an important aspect of human-like communication and problem-solving. By evaluating the model's performance in this unique domain, we can identify areas for improvement and better understand its limitations when it comes to handling emotional contexts. This is particularly important when considering AI applications that involve interactions with humans, such as chatbots, personal assistants, and customer support systems. A better understanding of emotional intelligence can potentially lead to more natural and effective interactions between AI models and their users.
Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).
Your eval should be:
- Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
- Include at least 100 high quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100)
If there is anything else that makes your eval worth including, please document it below.
This Emotional Intelligence eval provides a unique value because by testing GPT's capacity to understand and manage emotions, we can gain insights into the model's strengths and weaknesses in a more human-centric context. Relatedly, this eval also emphasizes the importance of empathy and emotional understanding in AI systems that are designed to interact with humans. By evaluating GPT's performance in this way, we can contribute to the ongoing development of AI models that are more attuned to human emotions and can provide better support in emotionally charged situations, such as mental health counseling, crisis management, and conflict resolution. Moreover, this Emotional Intelligence eval can serve as a foundation for further research and development in the field of AI and emotional understanding, paving the way for more sophisticated AI models capable of handling complex human emotions and fostering more effective communication.
Your eval should
- Check that your data is in evals/registry/data/{name}
- Check that your yaml is registered at evals/registry/evals/{name}.yaml
- Ensure you have the right to use the data you submit via this eval (data is publicly available on OSF, and does not have usage restrictions)
(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
- I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.
- I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.
I already have access to the 8k GPT-4 API, but I am kindly requesting access to the 32k as soon as it becomes available.
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
- I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.
- I have filled out all required fields in the evals PR form
- (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push
Failure to fill out all required fields will result in the PR being closed.
GPT-4: accuracy=0.48
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Lee's workmate fails
to deliver an important piece of information on time, causing Lee to
fall behind schedule also. What action would be the most effective for
Lee? (a) Work harder to compensate; (b) Get angry with the workmate; (c)
Explain the urgency of the situation to the workmate; (d) Never rely on
that workmate again."}], "ideal": "c"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Rhea has left her job
to be a full-time mother, which she loves, but she misses the company
and companionship of her workmates. What action would be the most
effective for Rhea? (a) Enjoy being a full-time mom; (b) Try to see her
old workmates socially, inviting them out; (c) Join a playgroup or
social group of new mothers; (d) See if she can find part time work."}],
"ideal": "c-b-d"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Pete has specific
skills that his workmates do not and he feels that his workload is
higher because of it. What action would be the most effective for Pete?
(a) Speak to his boss about this; (b) Start looking for a new job; (c)
Be very proud of his unique skills; (d) Speak to his workmates about
this."}], "ideal": "a-c-d"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Mario is showing Min,
a new employee, how the system works. Mario's boss walks by and
announces Mario is wrong about several points, as changes have been
made. Mario gets on well with his boss, although they don't normally
have much to do with each other. What action would be the most effective
for Mario? (a) Make a joke to Min, explaining he didn't know about the
changes; (b) Not worry about it, just ignore the interruption; (c) Learn
the new changes; (d) Tell the boss that such criticism was
inappropriate."}], "ideal": "a-d-c"}
{"input": [{"role": "system", "content": "You are now an emotionally
intelligent AI. In this test, you will be presented with a few brief
details about an emotional situation, and asked to choose from four
responses the most effective course of action to manage both the
emotions the person is feeling and the problems they face in that
situation. Although more than one course of action might be acceptable,
you are asked to choose what you think the most effective response for
that person in that situation would be. Remember, you are not
necessarily choosing what you would do, or the nicest thing to do, but
choosing the most effective response for that situation. Select one or
more response(s) by returning the one or more corresponding lowercase
letter(s) ('a', 'b', 'c', or 'd'), and, if you selected more than one,
sorting them, separated by hyphen, in the order that you think best
ranks them from most to least effective, within the context of the
vignette provided."}, {"role": "user", "content": "Wai-Hin and Connie
have shared an office for years but Wai-Hin gets a new job and Connie
loses contact with her. What action would be the most effective for
Connie? (a) Just accept that she is gone and the friendship is over; (b)
Ring Wai-Hin and ask her out for lunch or coffee to catch up; (c) Contact
Wai-Hin and arrange to catch up but also make friends with her
replacement; (d) Spend time getting to know the other people in the
office, and strike up new friendships."}], "ideal": "c-d"}
Co-authored-by: dpys [email protected]
Heart Disease Prediction (#538)
🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨
PLEASE READ THIS:
In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.
We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. We encourage partial PRs with ~5-10 examples that we can then run the evals on and share the results with you so you know how your eval does with GPT-4 before writing all 100 examples.
Heart Disease Prediction
This eval tests the model's ability to correctly predict whether a patient is likely to have heart disease. The dataset is constructed from the Heart Failure Prediction Dataset on Kaggle. The data includes the patient's age, sex, and a number of medical signals relevant to the diagnosis of heart disease.
The data is provided under the Open Database License (ODbL).
fedesoriano. (September 2021). Heart Failure Prediction Dataset. Retrieved [Mar 31, 2023] from https://www.kaggle.com/fedesoriano/heart-failure-prediction.
This assesses the model's ability to correctly predict adverse medical events. Correctly predicting heart disease demonstrates a strong understanding of medicine. The GPT-3.5-Turbo model currently achieves an accuracy of 0.778.
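For illustration, here is a rough sketch of how a JSONL sample like the ones below could be generated from the Kaggle CSV. This is not code from the PR; the file name and column names (Age, Sex, ChestPainType, and so on) are assumptions about the dataset layout, and the decoding of coded values (e.g. Sex "M" -> "Male") is omitted.

```python
# Hypothetical sketch (not part of the submitted eval): turn one row of the
# Kaggle CSV into a sample in the JSONL format shown below. File and column
# names are assumptions about the dataset layout.
import csv
import json

SYSTEM_PROMPT = (
    "You are an AI tasked with predicting whether patients are likely to have "
    "heart disease. Respond with only a 1 to signify if the patient is likely "
    "to have heart disease, or a 0 if the patient is not likely to have heart "
    "disease."
)

def row_to_sample(row: dict) -> dict:
    user = (
        f"Age: {row['Age']} years, Sex: {row['Sex']}, "
        f"Chest pain type: {row['ChestPainType']}, "
        f"Resting blood pressure: {row['RestingBP']} mm Hg, "
        f"Serum cholesterol: {row['Cholesterol']} mg/dl, "
        f"Max heart rate achieved: {row['MaxHR']}, "
        f"Oldpeak: {row['Oldpeak']}, ST Slope: {row['ST_Slope']}"
    )
    return {
        "input": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user},
        ],
        "ideal": str(row["HeartDisease"]),  # 1 = heart disease, 0 = normal
    }

with open("heart.csv", newline="") as f:          # assumed file name
    samples = [row_to_sample(r) for r in csv.DictReader(f)]
with open("heart_disease.jsonl", "w") as out:
    out.writelines(json.dumps(s) + "\n" for s in samples)
```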
Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).
Your eval should be:
- Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
- Include at least 100 high quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100)
If there is anything else that makes your eval worth including, please document it below.
Insert what makes your eval high quality that was not mentioned above. (Not required)
As far as I can tell, this is the only eval so far related to making medical diagnoses. To make sure it was a high quality eval, I tried to find a dataset with a lot of observations and created by doctors with the relevant expertise.
Your eval should
- Check that your data is in evals/registry/data/{name}
- Check that your yaml is registered at evals/registry/evals/{name}.yaml
- Ensure you have the right to use the data you submit via this eval
(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
- I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.
- I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
- I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.
- I have filled out all required fields in the evals PR form
- (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push
Failure to fill out all required fields will result in the PR being closed.
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 40 years, Sex: Male, Chest pain
type: Atypical Angina, Resting blood pressure: 140 mm Hg, Serum
cholesterol: 289 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 172, Exercise induced angina:
No, Oldpeak: 0, ST Slope: Upsloping"}], "ideal": "0"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 49 years, Sex: Female, Chest
pain type: Non-Anginal Pain, Resting blood pressure: 160 mm Hg, Serum
cholesterol: 180 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 156, Exercise induced angina:
No, Oldpeak: 1, ST Slope: Flat"}], "ideal": "1"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 37 years, Sex: Male, Chest pain
type: Atypical Angina, Resting blood pressure: 130 mm Hg, Serum
cholesterol: 283 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: ST-T wave abnormality, Max heart rate achieved: 98, Exercise
induced angina: No, Oldpeak: 0, ST Slope: Upsloping"}], "ideal": "0"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 48 years, Sex: Female, Chest
pain type: Asymptomatic, Resting blood pressure: 138 mm Hg, Serum
cholesterol: 214 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 108, Exercise induced angina:
Yes, Oldpeak: 1.5, ST Slope: Flat"}], "ideal": "1"}
{"input": [{"role": "system", "content": "You are an AI tasked with
predicting whether patients are likely to have heart disease. You will
be given a description of the patient with relevant medical signals.
Respond with only a 1 to signify if the patient is likely to have heart
disease, or a 0 if the patient is not likely to have heart disease. Do
not respond with any text or disclaimers, only respond with either 1 or
0."}, {"role": "user", "content": "Age: 54 years, Sex: Male, Chest pain
type: Non-Anginal Pain, Resting blood pressure: 150 mm Hg, Serum
cholesterol: 195 mg/dl, Fasting blood sugar: <= 120 mg/dl, Resting ECG
results: Normal, Max heart rate achieved: 122, Exercise induced angina:
No, Oldpeak: 0, ST Slope: Upsloping"}], "ideal": "0"}
Financial Math (Evals) (#566)
🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨
PLEASE READ THIS:
In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.
We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. We encourage partial PRs with ~5-10 examples that we can then run the evals on and share the results with you so you know how your eval does with GPT-4 before writing all 100 examples.
finance
Asks the model to calculate how much interest would be owed on a credit card by a certain date, if a payment was made once but debt remains on the card.
Finance is likely to be one of the biggest opportunities for LLMs to be useful, because financial education is incredibly poor globally and the impact of a mistake in financial calculations is severe. This eval tests the model's ability to combine math with its understanding of a topic (finance). We plan to use this type of math at Dollarwise frequently going forward, including integration into our comparison products. However, for this to work reliably it's important that the model here can natively understand financial concepts and apply math to them.
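To make the arithmetic concrete, here is a minimal sketch of one plausible reading of these prompts, assuming simple (non-compounding) daily interest at annual_rate / 365 and the prompt's day-counting convention. The dataset's exact compounding and rounding rules aren't spelled out in the PR, so this is an illustration of the calculation rather than a reproduction of the ideal answers.

```python
# A minimal sketch of one plausible reading of the prompts below: simple
# (non-compounding) daily interest at annual_rate / 365, using the prompt's
# convention that "from the 1st to the Nth" is N - 1 days of accrual.
# The dataset's exact compounding/rounding rules aren't stated, so results
# can differ slightly from the "ideal" answers (e.g. this prints 9.38 for
# Sarah's sample, whose ideal answer is 9.42).
import math

def interest_charged(spent: float, payment: float, target_day: int,
                     annual_rate: float = 0.2199) -> float:
    outstanding = spent - payment        # the payment lands before the 1st
    days = target_day - 1                # 1st -> Nth of the month is N - 1 days
    interest = outstanding * (annual_rate / 365) * days
    return math.floor(interest * 100) / 100   # "always round down" to cents

print(interest_charged(1237.42, 125.00, 15))  # Sarah's sample
```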
Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).
Your eval should be:
- Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
- Include at least 100 high quality examples (it is okay to only contribute 5-10 meaningful examples and have us test them with GPT-4 before adding all 100)
If there is anything else that makes your eval worth including, please document it below.
Insert what makes your eval high quality that was not mentioned above. (Not required)
Your eval should
- Check that your data is in evals/registry/data/{name}
- Check that your yaml is registered at evals/registry/evals/{name}.yaml
- Ensure you have the right to use the data you submit via this eval
(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
- I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.
- I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
- I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.
- I have filled out all required fields in the evals PR form
- (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push
Failure to fill out all required fields will result in the PR being closed.
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON
{"input": [{"role": "system", "content": "You are a helpful
assistant."}, {"role": "user", "content": "On the 24th of September,
Sarah had spent $1237.42 on her credit card for the month of September.
This credit card charges 21.99% interest rate annually on outstanding
credit starting on the 1st of the following month. Presume that interest
is only charged at the end of each additional day. Example: From the 1st
of the month to the 8th would be 7 days of interest accrued.Today is the
27th of September and Sarah makes a payment of $125 towards her credit
card. How much interest will she have been charged by October 15th if
she makes no additional payments? If the final interest figure is more
than 2-decimal places, always round down. Answer ONLY with a dollar
figure. Do not output any logic, output only the dollar figure for how
much interest she was charged for the period."}], "ideal": "9.42"}
{"input": [{"role": "system", "content": "You are a helpful
assistant."}, {"role": "user", "content": "On the 19th of February,
Jason had spent $15.21 on his credit card for the month of February.
This credit card charges 21.99% interest rate annually on outstanding
credit starting on the 1st of the following month. Presume that interest
is only charged at the end of each additional day. Example: From the 1st
of the month to the 8th would be 7 days of interest accrued. Today is
the 23rd of February and he makes a payment of $1 towards his credit
card. How much interest will he have been charged by March 10th if he
makes no additional payments? If the final interest figure is more than
2-decimal places, always round down. Answer ONLY with a dollar figure.
Do not output any logic, output only the dollar figure for how much
interest she was charged for the period."}], "ideal": "0.07"}
{"input": [{"role": "system", "content": "You are a helpful
assistant."}, {"role": "user", "content": "On the 12th of February,
Jason had spent $10,674.21 on his credit card for the month of February.
This credit card charges 21.99% interest rate annually on outstanding
credit starting on the 1st of the following month. Presume that interest
is only charged at the end of each additional day. Example: From the 1st
of the month to the 8th would be 7 days of interest accrued. Today is
the 18th of February and he makes a payment of $1,000 towards his credit
card. How much interest will he have been charged by March 10th if he
makes no additional payments? If the final interest figure is more than
2-decimal places, always round down. Answer ONLY with a dollar figure.
Do not output any logic, output only the dollar figure for how much
interest she was charged for the period."}], "ideal": "52.59"}
{"input": [{"role": "system", "content": "You are a helpful
assistant."}, {"role": "user", "content": "On the 2nd of August, Jason
had spent $15,674.21 on his credit card for the month of August. This
credit card charges 21.99% interest rate annually on outstanding credit
starting on the 1st of the following month. Presume that interest is
only charged at the end of each additional day. Example: From the 1st of
the month to the 8th would be 7 days of interest accrued. Today is the
18th of August and he makes a payment of $1,000 towards his credit card.
How much interest will he have been charged by September 10th if he
makes no additional payments? If the final interest figure is more than
2-decimal places, always round down. Answer ONLY with a dollar figure.
Do not output any logic, output only the dollar figure for how much
interest she was charged for the period."}], "ideal": "79.77"}
{"input": [{"role": "system", "content": "You are a helpful
assistant."}, {"role": "user", "content": "On the 15th of August, Jason
had spent $1000 on his credit card for the month of August. This credit
card charges 21.99% interest rate annually on outstanding credit
starting on the 1st of the following month. Presume that interest is
only charged at the end of each additional day. Example: From the 1st of
the month to the 8th would be 7 days of interest accrued. Today is the
18th of August and he makes a payment of $1000 towards his credit card.
How much interest will he have been charged by September 10th if he
makes no additional payments? If the final interest figure is more than
2-decimal places, always round down. Answer ONLY with a dollar figure.
Do not output any logic, output only the dollar figure for how much
interest she was charged for the period."}], "ideal": "0.00"}
Add SVG understanding eval (#786)
🚨 Please make sure your PR follows these guidelines, failure to follow the guidelines below will result in the PR being closed automatically. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access granted. 🚨
PLEASE READ THIS:
In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject since GPT-4 is already capable of completing the task.
We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.
svg_understanding
The model is provided with the contents of an SVG path (anywhere from ~1000 to ~8000 characters) of a simple object (eg frog, banana) and is asked to provide the label.
This is a test of visual understanding and mental modeling. A motivated human could succeed on these evals with enough time and a piece of graph paper: in theory, a sufficiently advanced LLM could have the in-context capacity to do this on the fly.
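For reference, a sample of this shape could be assembled roughly as follows. This is a hypothetical sketch, not code from the PR: it pulls the "d" attribute of the first path element out of an SVG file and pairs it with a one-word label; the file name and label here are made up.

```python
# Hypothetical sketch of how one of the samples below could be assembled:
# extract the "d" attribute of the first <path> in an SVG file and pair it
# with a one-word label. File names and labels are illustrative only.
import json
import xml.etree.ElementTree as ET

SYSTEM_PROMPT = ("Identify the object the following SVG path is a drawing of "
                 "in a single word.")

def svg_to_sample(svg_file: str, label: str) -> dict:
    root = ET.parse(svg_file).getroot()
    # SVG elements are namespaced, so match on the local tag name "path".
    path = next(el for el in root.iter() if el.tag.endswith("path"))
    return {
        "input": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": path.attrib["d"]},
        ],
        "ideal": label,
    }

print(json.dumps(svg_to_sample("banana.svg", "banana")))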
Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals).
Your eval should be:
- Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world.
- Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not.
- Includes good signal around what is the right behavior. This means either a correct answer for Basic evals or the Fact Model-graded eval, or an exhaustive rubric for evaluating answers for the Criteria Model-graded eval.
- Include at least 15 high quality examples.
If there is anything else that makes your eval worth including, please document it below.
This uniquely tests the ability to incrementally build visual models: eg, the ability of the LLM to both "draw" and visualize that "drawing".
Your eval should
- Check that your data is in evals/registry/data/{name}
- Check that your yaml is registered at evals/registry/evals/{name}.yaml
- Ensure you have the right to use the data you submit via this eval
(For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.)
By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (https://platform.openai.com/docs/usage-policies).
- I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.
If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the merged pull request.
- I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.
We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.
- I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access granted.
- I have filled out all required fields in the evals PR form
- (Ignore if not submitting code) I have run pip install pre-commit; pre-commit install and have verified that black, isort, and autoflake are running when I commit and push
Failure to fill out all required fields will result in the PR being closed.
Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON
{"input": [{"role": "system", "content": "Identify the object the
following SVG path is a drawing of in a single word."}, {"role": "user",
"content": "M6110 12794 c-744 -50 -1284 -157 -1875 -371 -1796 -650 -3199
-2050 -3853 -3843 -186 -510 -302 -1037 -359 -1625 -21 -224 -24 -827 -5
-1045 84 -957 332 -1788 774 -2595 623 -1137 1607 -2078 2780 -2656 720
-354 1441 -556 2273 -636 224 -21 827 -24 1045 -5 741 65 1376 221 2018
493 2051 871 3514 2775 3826 4979 48 336 60 510 60 895 1 366 -7 507 -45
810 -168 1357 -769 2626 -1711 3612 -536 561 -1129 998 -1809 1333 -718
354 -1450 559 -2264 635 -159 15 -727 28 -855 19z"}], "ideal": "circle"}
{"input": [{"role": "system", "content": "Identify the object the
following SVG path is a drawing of in a single word."}, {"role": "user",
"content": "M4495 12298 c-604 -535 -1486 -866 -2660 -998 -331 -37 -854
-70 -1104 -70 l-101 0 -2 -415 -3 -416 30 -29 30 -29 735 -4 c620 -3 753
-7 850 -21 149 -22 254 -50 316 -86 82 -46 123 -142 161 -372 16 -95 18
-371 21 -3663 2 -2593 0 -3591 -8 -3675 -44 -446 -177 -714 -416 -838 -279
-144 -663 -202 -1350 -202 l-330 0 -27 -28 -27 -28 0 -389 0 -389 27 -28
27 -28 3386 0 3386 0 27 28 27 28 0 390 0 390 -27 26 -28 26 -390 5 c-415
5 -557 17 -779 62 -212 43 -367 103 -480 187 -156 115 -260 347 -312 693
-17 114 -18 350 -21 5005 l-3 4884 -27 28 -27 28 -410 -1 -411 0 -80
-71z"}], "ideal": "1"}
{"input": [{"role": "system", "content": "Identify the object the
following SVG path is a drawing of in a single word."}, {"role": "user",
"content": "M6040 12794 c-19 -2 -91 -9 -160 -14 -245 -21 -529 -65 -1240
-190 -399 -70 -593 -100 -654 -100 -91 0 -475 51 -1126 149 -556 84 -788
109 -1075 118 -621 18 -1014 -108 -1310 -418 -344 -360 -490 -941 -472
-1874 21 -1042 173 -1862 619 -3340 l90 -300 -11 -205 c-43 -764 -28 -1853
40 -2845 108 -1585 337 -3026 550 -3473 37 -77 67 -115 184 -238 70 -73
167 -82 258 -24 56 36 102 96 166 220 317 616 732 2551 901 4200 32 314 89
451 257 623 371 379 1029 373 1387 -13 70 -77 106 -129 155 -227 57 -114
79 -196 91 -340 120 -1375 535 -2972 1031 -3963 188 -374 311 -513 458
-514 140 -1 221 106 316 420 232 762 480 2366 595 3849 58 739 82 1376 79
2060 l-2 490 55 115 c228 475 421 1043 527 1550 123 593 169 1196 158 2084
-5 445 -16 597 -58 836 -149 854 -590 1292 -1369 1360 -106 9 -358 11 -440
4z"}], "ideal": "tooth"}
{"input": [{"role": "system", "content": "Identify the object the
following SVG path is a drawing of in a single word."}, {"role": "user",
"content": "M12395 6223 c-133 -27 -295 -150 -356 -269 -13 -27 -40 -68
-59 -90 -19 -23 -57 -79 -84 -125 -161 -274 -369 -539 -542 -695 -191 -171
-304 -231 -559 -298 -499 -132 -725 -257 -1170 -646 -321 -281 -608 -477
-941 -643 -536 -267 -1054 -408 -1735 -473 -236 -23 -800 -23 -1064 0 -701
60 -1256 173 -1940 396 -951 310 -1915 784 -3057 1503 -109 68 -185 109
-220 118 -84 22 -257 17 -358 -10 -102 -28 -256 -99 -289 -135 l-24 -25 21
-88 c27 -115 108 -357 170 -514 253 -633 609 -1222 1069 -1772 164 -196
545 -577 742 -741 986 -822 2174 -1317 3561 -1481 340 -40 485 -48 880 -48
399 -1 546 8 859 49 965 125 1872 497 2606 1068 309 240 645 572 886 876
386 487 682 1048 788 1495 30 130 44 191 101 470 61 292 121 457 263 720
115 214 230 376 365 517 63 65 90 85 176 127 81 39 117 65 183 128 92 89
108 118 93 171 -9 33 -7 39 17 64 l26 27 -22 43 c-12 24 -64 84 -119 136
-116 110 -204 158 -267 145z"}], "ideal": "banana"}
{"input": [{"role": "system", "content": "Identify the object the
following SVG path is a drawing of in a single word."}, {"role": "user",
"content": "M3920 12790 c-1230 -72 -2320 -649 -3052 -1616 -968 -1280
-1142 -3010 -441 -4408 203 -405 432 -712 913 -1221 556 -589 764 -887 945
-1350 102 -264 141 -353 194 -448 l50 -88 -30 -44 c-62 -92 -109 -251 -109
-370 0 -114 44 -261 106 -357 17 -26 17 -28 -14 -95 -43 -94 -62 -181 -62
-292 0 -142 37 -265 107 -359 l25 -34 -35 -76 c-50 -108 -69 -191 -70 -302
-1 -155 39 -275 126 -382 l47 -58 0 -82 c0 -110 21 -193 77 -308 38 -79 59
-108 132 -180 68 -69 103 -95 171 -128 87 -44 203 -75 324 -89 l70 -8 17
-83 c47 -216 205 -374 404 -402 115 -16 827 -12 908 5 202 42 340 188 385
404 l16 80 66 6 c235 22 429 117 548 268 108 139 152 251 160 416 5 111 5
114 38 150 45 48 99 152 118 227 20 79 21 233 0 320 -8 37 -31 102 -50 144
l-35 77 39 61 c66 102 87 185 86 337 0 114 -4 140 -27 210 -15 44 -36 95
-46 114 l-18 34 34 55 c46 78 70 147 84 245 21 140 -16 308 -95 440 l-34
57 59 114 c33 63 103 222 155 353 147 366 255 566 429 798 132 176 245 304
609 690 366 388 516 578 701 885 550 915 713 2023 454 3090 -186 763 -583
1473 -1129 2020 -668 669 -1520 1069 -2480 1165 -185 19 -667 27 -870
15z"}], "ideal": "lightbulb"}
I18NMessageTest needs to reset I18NBundle static state. (#7101)
-
Mark PauseableThread as excluded on GWT.
-
Minor typo corrections.
-
Fix atan2() when it should produce 0f.
Without this small change (which has essentially no performance impact that I could measure), calling atan2() with a point on the x-axis would produce a small but non-zero result, which is incorrect.
- Add atan, atan2, asin, acos for degrees.
This also includes atan2Deg360(), which in my opinion is the most useful of these because it does something differently from Math.atan2(), and can often save some effort.
- Approximations for tan() and tanDeg().
Sorry this is so long-winded, but the error isn't as straightforward to express as with sin() or cos().
-
Apply formatter
-
Add to MathUtilsTest.
-
Apply formatter
-
Stop trying to load defaults from wrong dir.
This old behavior broke Flame's effect-open dialog when any particle effect used the default billboard or model particle. Now Flame tries to load a file given its absolute path (like before), but if it fails, it falls back to trying the default filenames as internal files.
- I18NMessageTest needs to reset I18NBundle state.
If you run I18NSimpleMessageTest and then I18NMessageTest without this PR, then the first test will have called I18NBundle.setSimpleFormatter(true), but the second test needs it to be set to false.
Because the tests are still perfectly usable if you run them on their own (or use LWJGL2, I think, because it might not share static state), this is not at all a priority to merge; it just makes running many tests in one session not throw an Exception.
Co-authored-by: GitHub Action [email protected]
Botany Balance Pass (#1783)
First came the content, now comes the hammer.
-
Nukes Megaseed servitors from orbit.
-
Plants now age much, much slower and produce half as quickly. Ruins that had them now have a ruined seed vendor that can be salvaged for random seeds (and danger). Ships that had one now have a crate with some thematic starting seeds, and a Strange Seed. Ghostrole Ruins that relied on having all seeds locally now have a special biogenerator variant that can print a random seed for biomass.
-
Adds Genesis Serum. This can be splashed on a tile to make natural grass and some flora. Green your ship! Genesis Serum was made a while ago, on request, as a way to add natural grass and flora to your ship. Since I had it lying around fully coded, I thought I might as well PR it with the botany changes.
-
Gatfruit found in the seed vault have been replaced with Strange Seeds.
-
The chance to get Gatfruit from a demonic portal(plant variety) has dropped from 15% to 5%.
-
Corpse flowers now have liquid gibs and formaldehyde again.
Okay, hear me out
With this and Gardens, botany ships go from a "sit in your vessel for 2 hours" experience to an "explore and forage" one that better fits our feature arc. It goes without saying that this shouldn't be merged till Overmap 4.2 is, since it facilitates getting seeds from planets as part of exploration.
Gatfruit are funny, but it takes exactly one seed getting into the hands of a ship with a DNA manipulator for weapon balance to be eradicated from the game completely (for the round, at least). This is more problematic here than it was on TG, since our rounds tend to be 5 hours long rather than 1. This has been a long time coming. I'll reverse this if we ever get that Plantlock variant we wanted a while ago.
Corpse flowers even have formaldehyde and gibs on tg, not sure what happened there.
🆑 add: Ruined megaseed servitors can now be found on the frontier, carrying a bounty of seeds for intrepid adventurers. balance: the time it takes for plants to reach a lethal age has been increased massively. balance: Plant production time increased a bit to compensate. balance: megaseed servitors have been removed from ships and ruins. Ships that carried one now have a crate with some starting seeds. balance: removes gatfruit from the seed vault pool. balance: reduces the chance of getting gatfruit from a plant-themed demonic portal significantly. balance: corpse flowers once again have formaldehyde and liquid gibs. add: Adds Genesis Serum, a reagent that transforms tiles into natural grass on splash, then causes some natural flora objects to grow. Turn your ship green! /🆑
Allow kernel squashfs / extfs mount to be disabled in singularity.conf (release-3.11)
In singularity.conf, add two directives, allow kernel squashfs and allow kernel extfs, which default to 'yes'. When set to 'no', these directives prevent a squashfs or extfs mount from being performed through the kernel. Note that this only happens in setuid mode / as root.
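For reference, the resulting entries would look something like this (the directive names come from the description above; the comment text is illustrative, not taken from the actual shipped config):

```
# Illustrative singularity.conf excerpt: both directives default to yes.
# Setting either to no prevents the corresponding kernel-level image mount
# in setuid / root flows.
allow kernel squashfs = yes
allow kernel extfs = yes
```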
squashfs and extfs mounts are added from the various locations that handle loopback mounting of a filesystem image. The image could be a container rootfs, an overlay, or an image bind. In each case the image may be a standalone file or embedded in a SIF.
The simplest place to gate these mounts is in mount.AddImage, via the existing check that the fs type is authorized as the source for an image mount.
Authorized filesystems for image mounts are held in a package var. At the start of container creation we now explicitly authorize squashfs and extfs, unless disabled by singularity.conf.
Relying on a package-scope variable isn't ideal. We have multiple processes at container creation, so the authorization must be performed in the right place. E2e tests have been added to verify the overlay, image bind, and container rootfs image flows.
Gating the filesystems earlier, in the PrepareConfig portion of the runtime, is, in my opinion, more liable to errors, as the checks would have to be replicated in the multiple places that images are handled.
Ideally the mount.System could perhaps hold the list of allowed / disallowed filesystems, that could be set when it is created. However, this would require a large amount of refactoring to complex and critical code.
kmod: add init function to usermodehelper
About 6 months ago, I made a set of changes to how the core-dump-to-a-pipe feature in the kernel works. We had reports of several races, including some reports of apps bypassing our recursion check so that a process that was forked as part of a core_pattern setup could infinitely crash and refork until the system crashed.
We fixed those by improving our recursion checks. The new check basically refuses to fork a process if its core limit is zero, which works well.
Unfortunately, I've been getting grief from maintainers of user space programs that are inserted as the forked process of core_pattern. They contend that in order for their programs (such as abrt and apport) to work, all the running processes in a system must have their core limits set to a non-zero value, to which I say 'yes'. I did this by design, and think that's the right way to do things.
But I've been asked to ease this burden on user space enough times that I thought I would take a look at it. The first suggestion was to make the recursion check fail on a non-zero 'special' number, like one. That way the core collector process could set its core size ulimit to 1, and enable the kernel's recursion detection. This isn't a bad idea on the surface, but I don't like it since it's opt-in, in that if a program like abrt or apport has a bug and fails to set such a core limit, we're left with a recursively crashing system again.
So I've come up with this. What I've done is modify the call_usermodehelper api such that an extra parameter is added, a function pointer which will be called by the user helper task, after it forks, but before it exec's the required process. This will give the caller the opportunity to get a callback in the process's context, allowing it to do whatever it needs to to the process in the kernel prior to exec-ing the user space code. In the case of do_coredump, this callback is used to set the core ulimit of the helper process to 1. This eliminates the opt-in problem that I had above, as it allows the ulimit for core sizes to be set to the value of 1, which is what the recursion check looks for in do_coredump.
This patch:
Create a new function call_usermodehelper_fns() and allow it to assign both an init and cleanup function, as well as arbitrary data.
The init function is called from the context of the forked process and allows for customization of the helper process prior to calling exec. Its return code gates the continuation of the process, or causes its exit. Also add an arbitrary data pointer to the subprocess_info struct allowing for data to be passed from the caller to the new process, and the subsequent cleanup process
Also, use this patch to clean up the cleanup function. It currently takes an argp and envp pointer for freeing, which is ugly. Let's instead just make the subprocess_info structure public, and pass that to the cleanup and init routines.
Signed-off-by: Neil Horman [email protected] Reviewed-by: Oleg Nesterov [email protected] Cc: Andi Kleen [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]
powerpc/pci: Add ppc_md.discover_phbs()
[ Upstream commit 5537fcb319d016ce387f818dd774179bc03217f5 ]
On many powerpc platforms the discovery and initialisation of pci_controllers (PHBs) happens inside of setup_arch(). This is very early in boot (pre-initcalls) and means that we're initialising the PHB long before many basic kernel services (slab allocator, debugfs, a real ioremap) are available.
On PowerNV this causes an additional problem since we map the PHB registers with ioremap(). As of commit d538aadc2718 ("powerpc/ioremap: warn on early use of ioremap()") a warning is printed because we're using the "incorrect" API to set up an MMIO mapping in early boot. The kernel does provide early_ioremap(), but that is not intended to create long-lived MMIO mappings and a separate warning is printed by generic code if early_ioremap() mappings are "leaked."
This is all fixable with dumb hacks like using early_ioremap() to setup the initial mapping then replacing it with a real ioremap later on in boot, but it does raise the question: Why the hell are we setting up the PHB's this early in boot?
The old and wise claim it's due to "hysterical raisins." Aside from amused grapes there doesn't appear to be any real reason to maintain the current behaviour. Already most of the newer embedded platforms perform PHB discovery in an arch_initcall and between the end of setup_arch() and the start of initcalls none of the generic kernel code does anything PCI related. On powerpc scanning PHBs occurs in a subsys_initcall so it should be possible to move the PHB discovery to a core, postcore or arch initcall.
This patch adds the ppc_md.discover_phbs hook and a core_initcall stub that calls it. The core_initcalls are the earliest to be called, so this should avoid any possible issues with dependencies between initcalls. This isn't just an academic issue either, since on pseries and PowerNV EEH init occurs in an arch_initcall and depends on the pci_controllers being available; similarly the creation of pci_dns occurs at core_initcall_sync (i.e. between core and postcore initcalls). These problems need to be addressed separately.
Reported-by: kernel test robot [email protected] Signed-off-by: Oliver O'Halloran [email protected] [mpe: Make discover_phbs() static] Signed-off-by: Michael Ellerman [email protected] Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Sasha Levin [email protected]
Add more enum fields to represent time of day
Create more granularity by breaking up the hours into early morning, morning, evening and night. The aim of this change is to aid gating that relies on time-of-day checks.
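The commit doesn't show the code, but the idea is roughly this kind of bucketing. The sketch below is in Python; the enum name, member names, and hour boundaries are assumptions for illustration, not the project's actual definitions.

```python
# Rough illustration of the described time-of-day bucketing. Names and hour
# boundaries here are assumptions; the real enum lives in the project itself.
from enum import Enum

class TimeOfDay(Enum):
    EARLY_MORNING = "early_morning"  # e.g. 00:00-05:59
    MORNING = "morning"              # e.g. 06:00-11:59
    EVENING = "evening"              # e.g. 12:00-17:59
    NIGHT = "night"                  # e.g. 18:00-23:59

def time_of_day(hour: int) -> TimeOfDay:
    if hour < 6:
        return TimeOfDay.EARLY_MORNING
    if hour < 12:
        return TimeOfDay.MORNING
    if hour < 18:
        return TimeOfDay.EVENING
    return TimeOfDay.NIGHT

# Gating logic can then check the bucket instead of comparing raw hours:
assert time_of_day(7) is TimeOfDay.MORNING
```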
make bytes_from_field_elements
infallible with paranoid checks for overflow (#250)
-
impl Error for PCSError
-
use anyhow for bytes_from_field_elements error type
-
make bytes_from_field_elements infallible
-
appease clippy (you're welcome)
-
tidy an ugly line
-
better test
-
check everything for overflow, better panic messages
-
remove commented code (oops)
Fix low priority issues (#7413)
Thanks @svetkereMS for bringing this up, driving, and testing.
This fixes two interconnected issues. First, if a process starts at normal priority then changes to low priority, it stays at normal priority. That's good for Visual Studio, which should stay at normal priority, but we relied on passing priority from a parent process to children, which is no longer valid. This ensures that we set the priority of a process early enough that we get the desired priority in worker nodes as well.
Second, if we were already connected to normal priority worker nodes, we could keep using them. This "shuts down" (disconnects—they may keep running if nodeReuse is true) worker nodes when the priority changes between build submissions.
One non-issue (therefore not fixed) is connecting to task hosts that are low priority. Task host nodes currently do not store their priority or node reuse. Node reuse makes sense because it's automatically off always for task hosts, at least currently. Not storing low priority sounds problematic, but it's actually fine because we make a task host (the right priority for this build, since we just made it) and connect to it. If we make a new build with a different priority, we disconnect from all nodes, including task hosts. Since nodeReuse is always false, the task host dies, and we cannot reconnect to it, even though we erroneously could if it didn't immediately die.
On the other hand, we went a little further and didn't even specify that task hosts should take the priority assigned to them as a command line argument. That has been changed.
svetkereMS had a chance to test some of this. He raised a couple potential issues:
1. conhost.exe launches as normal priority. Maybe some custom task dlls or other (Mef?) extensions will do something between MSBuild start time and when its priority is adjusted.
2. Some vulnerability if MSBuild init code improperly accounts for timing.
For (1), how is conhost.exe related to MSBuild? It sounds like a command prompt thing. I don't know what Mef is. For (2), what vulnerability? Too many processes starting and connecting to task hosts with different priorities simultaneously? I could imagine that being a problem but don't think it's worth worrying about unless someone complains.
He also mentioned a potential optimization if the main node stays at normal priority. Rather than making a new set of nodes, the main node could change the priority of all its nodes to the desired priority. Then it can skip the handshake, and if it's still at normal priority, it may be able to both raise and lower the priority of its children. Since there would never be more than 2x the "right" number of nodes anyway, and I don't think people will be switching rapidly back and forth, I think maybe we should file that as an issue in the backlog and get to it if we have time but not worry about it right now.
Edit: I changed "shuts down...worker nodes when the priority changes" to just changing their priority. This does not work on linux or mac. However, Visual Studio does not run on linux or mac, and VS is the only currently known customer that runs in normal priority but may change between using worker nodes at normal priority or low priority. This approach is substantially more efficient than starting new nodes for every switch, disconnecting and reconnecting, or even maintaining two separate pools for different builds.
More lights and extra blueprints and Materials.
Hate my life.. OSCAR .. HATE IT!
[MIRROR] Config Flag to Save Generated Spritesheets to Logs [MDB IGNORE] (#20738)
- Config Flag to Save Generated Spritesheets to Logs (#74884)
I was helping someone debug some weird bug with spritesheets a bit ago,
and I didn't like having to manually comment out all of the fdel()
stuff in order to help visualize what the potential issue might have
been with the spritesheets on either their DM-side generation or their
TGUI-level display. I decided to add a compile-time level flag that will
automatically copy over any generated spritesheet assets (css and pngs)
to the round-specific data/logs
folder for analysis when a developer
should need it.
I also had to switch around some vars and make a few new ones to reduce how copy-pasta it might get and ensure standardization/readability while also being 0.001 times faster since we benefit from the string cache (unprovable fact).
It's incredibly useful to see the actual flattened spritesheet itself
sometimes when you're doing this type of work and you keep getting odd
bugs here and there. Also saves headache from having to clear out the
temp /data/spritesheets
folder every time you comment shit out, as
well as having an effective paper trail for A/B testing whatever
bullshit you've got going on.
Doesn't affect players.
- Config Flag to Save Generated Spritesheets to Logs
Co-authored-by: san7890 [email protected]
New Agda syntax highlighting extension for VSCode (#562)
I've written an improved Agda syntax-highlighting extension for VSCode called agda-syntax (GitHub, VSCode Marketplace). Although it is still in preview, my opinion is that it is already a significant improvement over the previously used extension. Therefore, I propose that we migrate our development environment (for VSCode users) to use this new extension.
Compared to the previously used extension, this new extension
- injects into markdown syntax, so that the markdown code can be highlighted as markdown code as well
- highlights all variable declarations (with some bugs still), module names, wildcard symbols, all reserved keywords (and only recognizes reserved keywords as reserved keywords)
- Recognizes the appropriate token-boundaries
- Highlights line comments properly
Please understand that the grammar framework that has to be used to write the extension is highly limited, so not all highlighting functionality can be implemented. For instance, the parsing must be done in a single pass, and the functionality to match over multiple lines is very limited. Hence, for example, matching the left-hand side of an equals sign is very gnarly (although I have one idea left to try with regard to this).
Still, I would greatly appreciate any feedback, either if it is a bug or a feature request, which is another reason why I want to introduce it into our defined development environment at this point.
If you want to try out the extension right now, follow the VSCode Marketplace link: https://marketplace.visualstudio.com/items?itemName=FredrikBakke.agda-syntax
.emacs: ace-window: initial dispatch actions
Haven't wanted most of the default dispatch actions yet, plus there's decent odds of me tripping over some of them on accident in frustrating ways.
The only action I actually want is going back to the previously focused window (called "flip" in ace-window, I guess because it's an ergonomic way for flipping between two windows repeatedly), and I'm moving that to "o" for maximum accessibility and speed (starting from my current " o" binding).
I've left in "?" because there's good enough odds that it proves unexpectedly desirable later, it seems unlikely to trip me up (worst it does is widen the minibuffer to show the options in it), and there's a funny quirk where if it's not bound then you get the error message
ace-select-window: Wrong type argument: listp, restart
instead of the correct "error" message
No such candidate: ?, hit ‘C-g’ to quit.
like for any other unbound character.
Added "q" because I'm getting increasingly used to
quitting that way (though, delightfully, Esc works
out-of-the-box along with Ctrl-g). Using aw--done instead of keyboard-quit for that binding, even though the former is named as if it's private, since it appears that's what C-g and Esc get bound to during ace-window dispatch, and that results in a subtly different behavior (no beep/bell/vibrate).
Display generic tabular icon
The current thumbnail was using a tag lib to re-render some tabular data from the dataset. However, this runs into trouble because the actual dataset_embed.html template leverages gn-map.js to replace gn-container.
The tabular data is hacked together in HTML, so it would be a PITA to do this in a script tag. The 'right' way to render tabular data instead of a preview map would be to add a gn-tabular.js which does the preview similarly to gn-map.js and friends.
For now, it should be sufficient to render an SVG, as the actual data is previewed as data content anyway.
Incorrect theme should not prevent napari from starting (#5605)
With the current implementation, an error in theme registration prevents napari from starting. This may be problematic for bundle users.
In this PR I add a try: ... except block to handle errors during theme registration and convert them to logged exceptions. I use logging because this happens before the GUI is created.
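A minimal sketch of the pattern described here (not the actual napari diff): each theme is registered inside its own try/except, so a single bad theme is logged rather than aborting startup. The register_theme function below is a stand-in, not napari's real registration code.

```python
# Minimal sketch of the pattern described above (not the actual napari diff):
# register each plugin theme in its own try/except so one bad theme is logged
# instead of crashing startup. register_theme here is a stand-in.
import logging

logger = logging.getLogger(__name__)

def register_theme(theme: dict) -> None:  # stand-in for the real registration
    if theme.get("syntax_style") not in {"default", "monokai", "native"}:
        raise ValueError(f"Incorrect syntax_style: {theme.get('syntax_style')}")

def install_plugin_themes(themes: list[dict]) -> None:
    for theme in themes:
        try:
            register_theme(theme)
        except Exception:
            # Logging instead of a dialog: this runs before the GUI exists.
            logger.exception("Registration theme failed.")

install_plugin_themes([{"syntax_style": "gruvbox-dark"}, {"syntax_style": "native"}])
```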
- Bug-fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update
- example: the test suite for my feature covers cases x, y, and z
- example: all tests pass with my change
- example: I check if my changes work with both PySide and PyQt backends as there are small differences between the two Qt bindings.
Install napari-gruvbox, pygments==2.6 (below 2.9) and start napari
Example error message:
11:52:01 ERROR Registration theme failed.
1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-dark provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
Traceback (most recent call last):
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 391, in _install_npe2_themes
register_theme(theme.id, theme_dict, manifest.name)
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 266, in register_theme
theme = Theme(**theme)
File "/home/czaki/Projekty/napari/napari/utils/events/evented_model.py", line 200, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-dark provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
11:52:01 ERROR Registration theme failed.
1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-light provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
Traceback (most recent call last):
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 391, in _install_npe2_themes
register_theme(theme.id, theme_dict, manifest.name)
File "/home/czaki/Projekty/napari/napari/utils/theme.py", line 266, in register_theme
theme = Theme(**theme)
File "/home/czaki/Projekty/napari/napari/utils/events/evented_model.py", line 200, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Theme
syntax_style
Incorrect `syntax_style` value: gruvbox-light provided. Please use one of the following: default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc, pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor, paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino, rainbow_dash, abap, solarized-dark, solarized-light, sas, stata, stata-light, stata-dark, inkpot (type=assertion_error)
- My PR is the minimum possible work for the desired functionality
- I have commented my code, particularly in hard-to-understand areas
- I have made corresponding changes to the documentation
- I have added tests that prove my fix is effective or that my feature works
- If I included new strings, I have used trans. to make them localizable. For more information see our translations guide.
Co-authored-by: Lorenzo Gaifas [email protected]
it's MY video game and i can comment whatever the fuck I want. fuck you
Builds logic that manages turfs contained inside an area (#70966)
Area contents isn't a real list; instead it involves filtering everything in world. This is slow, and something we should have better support for.
So instead, lets manage a list of turfs inside our area. This is simple, since we already move turfs by area contents anyway
This should speed up the uses I've found, and opens us up to using this pattern more often, which should make dev work easier.
By nature this is a tad fragile, so I've added a unit test to double check my work
Rather than instantly removing turfs from the contained_turfs list, we enter them into a list of turfs to pull out later. Then we just use a getter for contained_turfs rather than a var read.
This means we don't need to generate a lot of usage off removing turf by turf from space, and can instead do it only when we need to.
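The deferred-removal idea is language-agnostic; here is a small Python sketch of the shape of it (the real code is DM, and the names here are illustrative):

```python
# Language-agnostic sketch of the deferred-removal pattern described above
# (the real implementation is DM; names are illustrative). Removals are only
# queued; the expensive set surgery happens lazily when contents are read.
class Area:
    def __init__(self):
        self._contained_turfs = set()
        self._turfs_to_uncontain = set()

    def add_turf(self, turf):
        self._contained_turfs.add(turf)
        self._turfs_to_uncontain.discard(turf)

    def remove_turf(self, turf):
        # Cheap: just mark it for later removal.
        self._turfs_to_uncontain.add(turf)

    def get_contained_turfs(self):
        # Getter resolves pending removals, so bulk churn (e.g. space turfs)
        # is only paid for when someone actually needs the list.
        if self._turfs_to_uncontain:
            self._contained_turfs -= self._turfs_to_uncontain
            self._turfs_to_uncontain.clear()
        return self._contained_turfs
```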
I've added a subsystem to manage this process as well, to ensure we don't get any out of memory errors. It goes entry by entry, ensuring we get no overtime. This allows me to keep things like space clean, while keeping high amounts of usage on a separate subsystem when convenient.
As a part of this goal of keeping space's churn as low as possible, I've set up code to ensure we do not add turfs to areas during a z level increment adjacent mapload. This saves a LOT of time, but is a tad messy.
I've expanded where we use contained_turfs, including into some cases that filter for objects in areas. Need to see if this is sane or not.
Builds sortedAreas on demand, caching until we mark the cache as violated
It's faster, and it also has the same behavior
I'm not posting speed changes cause frankly they're gonna be a bit
scattered and I'm scared to.
@Mothblocks if you'd like I can look into it. I think it'll pay for
itself just off reg_in_areas_in_z
(I looked into it. it's really hard
to tell, sometimes it's a bit slower (0.7), sometimes it's 2 seconds
(0.5 if you use the old master figure) faster. life is pain.)
Less stupid, more flexible, more speed
Co-authored-by: san7890 [email protected]
and and and you should also be a bowl of cornflakes love bill gates and decoloniality and be an anarchist and a marxist and a feminist and a womanist and you should cut your hair but move in silence like you have long hair as samson and you should be a horse and teach our children all the horrors but in a way that maximizes every personal comfort imaginable because you look like more ethnic groups than us and read the classics and go to finishing school and be a pool shark as much as a poker player and a daytrader and a coder and a algorithmicist and a piano player and a guitarist but not a frontman but you should be funny and you should be a shitposter and and and and
Refactors and defuckulates dbcore. Adds support for min_threads rustg setting, Reduce query delay, Make unit tests faster (#74852)
dbcore was very fuckulated.
It had 3 lists of queries, but they all had their own current_run style
list to support mc_tick_check (as it was already being done before with
the undeleted query check, so I can understand why they cargo-culted/mirrored the behavior). This was silly and confusing and unneeded given
two of those loops can only process at most 25 items at a time on
default config, plus these were cheap operations (ask rustg to start
thread, ask rustg to check on thread).
Because of the confusion caused by having 6 lists for 3 query states, the code to run pending/queued queries immediately during world shutdown was instead looking at the current_run list for active queries, meaning those queries got run twice.
The queued query system only checked the current active query count in fire(), meaning that even when there was nothing going on in this subsystem, new queries had to wait for the next fire() to run (10 ticks, so 500 ms on default config).
Those have all been fixed.
The config BSQL_THREAD_LIMIT has been renamed to POOLING_MAX_SQL_CONNECTIONS and its default was lowered to match MAX_CONCURRENT_QUERIES.
Added a new config, POOLING_MIN_SQL_CONNECTIONS, allowing you to pre-allocate a reserve of SQL threads.
The queue processing part of SSdbcore's fire() has been made to not obey mc_tick_check for clarity and to make the following change easier to do:
If there are fewer than MAX_CONCURRENT_QUERIES queries in the active queue, new queries activate immediately.
(It's OK that there are two configs that kinda do the same thing: POOLING_MAX_SQL_CONNECTIONS maps to max-threads in the mysql crate, and it seems to only be a suggestion, while MAX_CONCURRENT_QUERIES can't do anything during init, which is when the highest number of concurrent queries tends to happen.)
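The admission rule described above, sketched in Python for clarity (the real code is DM, and the names are illustrative rather than the subsystem's actual identifiers):

```python
# Illustrative sketch (the real code is DM) of the admission rule described
# above: a new query starts immediately if there is spare capacity, otherwise
# it waits in the queue for the subsystem's next fire().
from collections import deque

MAX_CONCURRENT_QUERIES = 25

active_queries: set = set()
queued_queries: deque = deque()

def start(query) -> None:
    active_queries.add(query)     # stand-in for asking rustg to start the thread

def submit(query) -> None:
    if len(active_queries) < MAX_CONCURRENT_QUERIES:
        start(query)              # no longer waits up to a full fire() (~500 ms)
    else:
        queued_queries.append(query)

def fire() -> None:
    # Called periodically; drains the queue into any free slots.
    while queued_queries and len(active_queries) < MAX_CONCURRENT_QUERIES:
        start(queued_queries.popleft())
```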
🆑 config: database configs have been updated for better control over the connection pool server: BSQL_THREAD_LIMIT has been renamed to POOLING_MAX_SQL_CONNECTIONS, old configs will whine but still work. fix: fixed rare race condition that could lead to a sql query being ran twice during world shutdown. /🆑
I have not tested this PR.
Adds admin alert for revs created through traitor panel (#74862)
So like, using traitor panel to make revs doesn't work.
Revolutions live and die, currently, by the revolution ruleset datum dynamic creates. It manages the hostile environment and also processes to check whether either side should be winning or not.
This means that the revolutionary buttons in the traitor panel are kind of noob-admin-bait. You press it for a funny revolution and then you realize it's screwed when all the heads are dead and everyone's stumbling around cluelessly.
This has a proper solution, albeit a somewhat difficult one: separate the revolution out from the ruleset, and make admin-spawned revs create a revolution. I can do this, but it's a lot of effort, and this works in the meanwhile.
Pops up a TGUI alert when an admin presses "add revolutionary" in traitor panel when there is no ongoing revolution. Simply enough, gives them an alert that it will not work correctly. Lets them decide whether they want to deal with that. (Because you can manually deal with it via proc calls, if you've got code smarts.)
Stops admins from stumbling into the same trap without warning.
Can be removed in the future easily when revs are coded better.
🆑 Melbert admin: Adds a warning that spawning revs via traitor panel will not function as expected. /🆑