1,819,764 events recorded by gharchive.org, of which 1,819,764 were push events containing 2,762,138 commit messages amounting to 204,733,594 characters, filtered with words.py@e23d022007... down to these 32 messages:
[B.14] [B.18][C.16] _ SEC, under the Sarbanes-Oxley Act of 2002. [TCR FILES ATTACHED.].html More specifically,
Rule 12d1-1 [17 CFR 270.12d1-1]
I also enjoy other CONFIRMATIONS of UNFAIR DEALINGS WHICH HAVE ALREADY been filed on behalf of MYSELF BY OTHERS.
AND IN THE UNFAIR DEALINGS AS SEEN IN THE NYSCEF DOCKETS ANNEXED AS EXHIBITS IN 153974/2020.
[2022.04.17 [Price Waterhouse Cooper][SEC][Sarbanes-Oxley Act of 2002].pdf]
CURRENTLY PRESENTS AN EVEN LARGER BODY OF RISK WHICH WAS DUE TO THE OMISSION, AND FAILURE TO DISCLOSE THE GROSS NEGLIGENCE
BY THE ZUCKER ORGANIZATION AND ITS COUNSELORS, WHO WILL ACCEPT $2.00 AND, WITHOUT ANY QUESTIONS ASKED, WILL PAY TO AVOID THEIR OWN PROSECUTION, AS SEEN IN THE HARASSMENT OF THE COURTS WITHOUT ANY "REASONABLE CAUSE".
*** THIS IS A FILED DOCUMENT WITH THE NY SUPREME COURT OF LAW - 1 PAGE ONLY - AND WITH REASONABLE CAUSE - AS SEEN IN THE FINE PRINT AND BELOW. *** https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=19MVPFXy0G0QvnmRLGpYIQ==
% JUST LIKE UNDER THE Investment Company Act File Number: 811-22538, FILER 1516523, and with a BRAND NEW SET OF KEYS - ENJOY THESE COURT DOCKETS WHILE YOU CROSS THEM AS WELL.... AFTER THEY JUST DETONATED THEIR PRIOR INVESTMENT ADVISER AND ALL OF THEIR ORIGINAL DOCUMENTS, AT THE EXPENSE OF AN AMOUNT GREATER THAN $900,000,000.00 IN THE PROCESS (AS OF CURRENT), AS SEEN IN THE LOSSES BY ITS INVESTORS.
to avoid THIS: https://violationtracker.goodjobsfirst.org/violation-tracker/tx-state-farm-lloyds
MAINTAIN THE SAME SUB-MANAGER, COMPLIANCE OFFICER, AND ADDRESS - AND PAID $912,500.00 TO CONDUCT THAT SMOOTH MANEUVER.
- ALL OF WHICH, WHILE COMPARING APPLES TO APPLES, DID NOT WORK OUT IN THE INTEREST OF THEIR RESPONSIBILITY TO THEIR INVESTORS.
--- IT IS GROSS NEGLIGENCE - BY DEFINITION - TO PULL A STUNT LIKE THIS - AND PROXY EVERYONE FOR:
$9,000.00 OF ACCOUNTING,
$170,000.00 IN POSTAGE,
$443,000.00 IN DIRECTORS' FEES, ETC., FOR A TOTAL COST OF $912,500.00
-- WILL PAY TO AVOID THE VIOLATION ABOVE --- BUT WON'T FILE AT ANY COST ("WHICH IS FREE") TO KEEP THEIR INVESTORS INFORMED.
INSTEAD: THEY FILE THEMSELVES DOWN TO ZERO, "UNCOVERED", without any material DISCLOSURE OF A LEGAL MATTER LOAN, without EXPLANATION, and without a FINE REGISTERED so the subscribers at $250.00 (two-hundred-fifty-dollar allotments) can make an INFORMED DECISION - which is pathetic.
[Exchanges and data providers on Yahoo Finance Finance for Web Help - SLN2310.pdf]
NOT COVERED FOR OMISSIONS BY ITS DIRECT REPRESENTATIVES / PROMOTERS DURING THE PERIOD BETWEEN JANUARY 1, 2020 AND JANUARY 1, 2022.
- NOTWITHSTANDING ITS COUNSELORS, AUDITORS, CUSTODIANS, ADVISERS, AND BROKER DEALERS.
Fund Surviving the Merger - Advisers Investment Trust
"Investment Company Act File Number "811-22538"
"State Farm VP Management Corp. "000043036"
2021-11-30 2021-03-31
ASSETS: $10,164,850,238
PERFORMANCE: (-852,029,489.38)
##TICKER: BRK-B## HOW DO YOU LIKE THOSE APPLES INSTEAD?
https://www.sec.gov/Archives/edgar/data/0000093715/000119312521278180/d222043dn8f.htm
TOTAL AUM REPORT 2020: $ 8,294,447,250 ## TOTAL AUM NOV 30, 2021: $ 10,164,850,238
** Fund Surviving the Merger - Advisers Investment Trust ##
- IS MATERIALLY MISREPRESENTED, HAVING BEEN FILED ON: April 23, 2021
- THE
** SEMI-ANNUAL HOLDING REPORT ** State Farm Associates' Funds Trust ** [N-CSRS] 0001193125-20-200810 ** REPORT DATE: 2020-05-31 ** FILING DATE: 2020-07-28 ** https://www.sec.gov/Archives/edgar/data/0000093715/000119312520200810/d913497dncsrs.htm
##https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=TxAa7cNVIHKtnJU/ni/zvg==
o State Farm Growth Fund $ 5,110,893,730
o State Farm Balanced Fund $ 2,034,992,761
o State Farm Interim Fund $ 433,682,403
o State Farm Municipal Bond Fund $ 714,878,356
_______________________________________________________
- but more specifically, when I am certain [ hold the same number as RICKY HENDERSON - were FILED on the 24TH of November ]
- others were filed on November 13th, 2021, had they not been obstructed.
- which would have prevented this LARGER EMBODIMENT.... thus OBSTRUCTION is not GOOD for THOSE who UNDERSTAND that when I file a TCR it is OFFICIAL.
## WHEN I am CERTAIN THAT A DELUGE of INFORMATION is INEVITABLE,
I do NOTIFY parties - which in part is a failure on the PART OF THE CURRENT "PROMOTERS" to disclose these undisclosed / unregistered securities and make them available to the General PUBLIC
- and in the Central Registration Depository by the Representatives at State Farm VP. FIRM 43036...
SO HOW EXACTLY ARE THESE MORAL DECISIONS BY THE "PROMOTERS" of a $10 BILLION DOLLAR "State Farm Associates' Funds Trust"?
COMPLIANCE OFFICER & TREASURER ALSO CERTIFIED UNDER THESE DOCTRINES HERE.
- HAVING VIOLATED PWC, THEIR CONCERTED EFFORTS OF OBSTRUCTION AND UNFAIR DEALINGS HAVE CREATED AN EVEN LARGER PROBLEM, WITH THE HELP OF THE ZUCKER FAMILY AND ITS COUNSELORS.
- REQUESTED AN ESTOPPEL: STATE FARM AND CEASE AND DESIST.
https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=s5WAeCnxmd/hcOI4eTnbig==
- REQUESTED AN ESTOPPEL: THE ZUCKER FAMILY & ITS COUNSELORS.
https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=Jf3Un/JaVXZwF7kvbaee4w==
SEE ALSO: EX72, AN UNKNOWN CORPORATION WHICH I NEVER MENTIONED, ANNEXED IN A LETTER BY COUNSELORS FOR PLAINTIFF (IN A FORMER CASE), WHEREBY THEY ALSO AVOIDED PROSECUTION BY USING UNFAIR PRACTICES, NOTWITHSTANDING SERVICE - AND A FAILURE ON THE FIRST PAGE AND DOCKET - NOTWITHSTANDING THEIR "REQUEST FOR JUDICIARY INTERVENTION", WHICH WAS ADDRESSED TO MYSELF USING THE "BUILDING" AS THE ADDRESS, NO APARTMENT NUMBER NEEDED - BECAUSE THEY HAVE NO CERTIFICATE OF OCCUPANCY TO COLLECT RENT LEGALLY - HENCE THOSE ARE THE LEASES AND RENTS GUARANTEED TO STATE FARM - AND THIS COULD HAVE BEEN MITIGATED WITHOUT AN UP-FRONT DELUGE BY PERMITTING MYSELF TO "CROSS" THEM IN A COURT OF LAW... INSTEAD THEY AVOIDED A CHANGE IN CAPTION AS SEEN IN THE EXHIBIT ("WHICH WAS NOT PERMITTED") TO BE ENTERED - DOCKET 420.
July 28, 2020
Pursuant to the requirements of the Securities Exchange Act of 1934 and the Investment Company Act of 1940, Sarbanes-Oxley Act of 2002.
[EX-99.CERT]
https://www.sec.gov/Archives/edgar/data/0000093715/000119312520200810/d913497dex99cert.htm
/s/ Joe R. Monk Jr., President.
/s/ Paul J. Smith, Senior Vice President and Treasurer.
[EX-99.906CERT]
https://www.sec.gov/Archives/edgar/data/0000093715/000119312520200810/d913497dex99906cert.htm
/s/ Joe R. Monk Jr., President.
/s/ Paul J. Smith, Senior Vice President and Treasurer.
NY DFS: DID ALSO USE A GROSS INCOME RATE, JUST LIKE THE 14.5 IN EARNINGS BY THE PROMOTER.
"Base Cap Rate: We used a capitalization rate of 7.200% which is Finance's estimate of the rate of return that an ordinary investor would expect on their investment in this type of property."
101 WEST 55TH STREET, NEW YORK, NY, 10019
Label # / Tracking Number: 9469003699300000590947
FOR INSTANCE - THIS IS SOMETHING STATE FARM FILED WITH THE SECURITIES AND EXCHANGE COMMISSION, HENCE something that I did NOT FILE; however, UPON DISCOVERY...
FILED as referenced --- beneath the lines below.
- HENCE FILED - by State Farm - INTERPRETED IN MY OWN WORDS, AND ON BEHALF OF STATE FARM.
- I EVEN FILE THEIR ISSUES TO HELP THEM -------------X
##Loan 50074: ASSIGNMENT OF RENTS AND LEASES CRFN ID.: 2020052000291002
##PARTY 1: SULLIVAN PROPERTIES, L.P. C/O THE ZUCKER ORGANIZATION 101 WEST 55TH STREET, NEW YORK, NY, 10019
##PARTY 2: STATE FARM REALTY MORTGAGE, LLC ONE STATE FARM PLAZA, BLOOMINGTON IL, 60710
- WITH LITTLE TO NO PROBABILITY THAT "PRICE WATERHOUSE COOPERS" WILL CHARGE ANY PERSON OR COMPANY $9,000 IN ACCOUNTING FEES TO ACCOUNT FOR $170,000.00 OF PROXIES AND TABULATIONS FOR N-COUNT OF INVESTORS AND ISSUE:
Opinions on the Financial Statements
We have audited the accompanying statements of assets and liabilities, including the schedules of investments, of State Farm Growth Fund, State Farm Balanced Fund, State Farm Interim Fund, and State Farm Municipal Bond Fund (four of the funds constituting Advisers Investment Trust, hereafter collectively referred to as the “Funds”) as of September 30, 2021, the related statements of operations for the period December 1, 2020 through September 30, 2021 and the year ended November 30, 2020, the statements of changes in net assets for the period December 1, 2020 through September 30, 2021 and for each of the two years in the period ended November 30, 2020, including the related notes, and the financial highlights for each of the periods indicated therein (collectively referred to as the “financial statements”). In our opinion, the financial statements present fairly, in all material respects, the financial position of each of the Funds as of September 30, 2021, the results of each of their operations for the period December 1, 2020 through September 30, 2021 and the year ended November 30, 2020, the changes in each of their net assets for the period December 1, 2020 through September 30, 2021 and for each of the two years in the period ended November 30, 2020 and each of the financial highlights for each of the periods indicated therein in conformity with accounting principles generally accepted in the United States of America.
Basis for Opinions
These financial statements are the responsibility of the Funds’ management.
Our responsibility is to express an opinion on the Funds’ financial statements based on our audits.
We are a public accounting firm registered with the Public Company Accounting Oversight Board (United States) (PCAOB) and are required to be independent with respect to the Funds in accordance with the U.S. federal securities laws and the applicable rules and regulations of the Securities and Exchange Commission and the PCAOB.
We conducted our audits of these financial statements in accordance with the standards of the PCAOB. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement, whether due to error or fraud.
Our audits included performing procedures to assess the risks of material misstatement of the financial statements, whether due to error or fraud, and performing procedures that respond to those risks. Such procedures included examining, on a test basis, evidence regarding the amounts and disclosures in the financial statements. Our audits also included evaluating the accounting principles used and significant estimates made by management, as well as evaluating the overall presentation of the financial statements. Our procedures included confirmation of securities owned as of September 30, 2021 by correspondence with the custodian and brokers; when replies were not received from brokers, we performed other auditing procedures. We believe that our audits provide a reasonable basis for our opinions.
November 22, 2021
We have served as the auditor of one or more investment companies in Advisers Investment Trust since 2011.
PricewaterhouseCoopers LLP, One North Wacker, Chicago, IL 60606
T: (312) 298 2000, www.pwc.com/us
https://www.sec.gov/info/accountants/audit042707.htm
Item B.14. Provision of financial support.
Instruction. For purposes of this Item, a provision of financial support includes any (1) capital contribution, (2) purchase of a security from a Money Market Fund in reliance on rule 17a-9 under the Act (17 CFR 270.17a-9), (3) purchase of any defaulted or devalued security at fair value reasonably intended to increase or stabilize the value or liquidity of the Registrant's portfolio, (4) execution of letter of credit or letter of indemnity, (5) capital support agreement (whether or not the Registrant ultimately received support), (6) performance guarantee, or (7) other similar action reasonably intended to increase or stabilize the value or liquidity of the Registrant's portfolio. Provision of financial support does not include any (1) routine waiver of fees or reimbursement of Registrant's expenses, (2) routine inter-fund lending, (3) routine inter-fund purchases of Registrant's shares, or (4) action that would qualify as financial support as defined above, that the board of directors has otherwise determined not to be reasonably intended to increase or stabilize the value or liquidity of the Registrant's portfolio.
MS, in FY 2020 AND FY 2021 as its PROMOTER, despite a CERTAIN "CONFLICT OF INTEREST", SOLD MORE SUBSCRIPTIONS...
CIK: 1516523 26 U.S. Code § [1] --- FOR EACH OF THE 6 BUILDINGS TOGETHER IS more than $50MM...
https://faxzero.com/status/30666994/5790f17018611119e07814be9e36110d164afaa6
--- I ALSO WILL FILE THEIR TAX EVASION PAPERS WITH CERTIFICATE OF OCCUPANCY
- AND USE THE NY SUPREME DOCKET THAT I FILED.
ALSO ANNEXED WITH THE NY DEPARTMENT OF BUILDINGS AND DURING THE PERIOD OF 40-17G IN THE FORMER.
HENCE WERE NOT AND ARE NOT COVERED FOR OMISSIONS IN THE NEWLY FILER: https://saaze2311prdsra.blob.core.windows.net/clean/e2fe82c1c6a2ec11b400002248316383/153974_2020_Sullivan_Properties_L_P_v_Baris_Dincer_EXHIBIT_S__231%20-%204%20august%202020%20-%20no%20certificate%20--%20see%20also%20bylaws%20and%20deadlines.pdf
https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER/pull/26 - also, Miss Daugherty, remember - as a stage-4 cancer patient - for them to pursue you in Court AFTER two years... please feel free to use ALL OF THESE PAPERS, use the word ANNEX as frequently as possible, and LET THEM TRY NOT TO ALLOW YOU TO CROSS THEM - PROPER.
power: Introduce OnePlus 3 fingerprintd thaw hack
Taken from Oneplus 3, this hack will make fingerprintd recover from suspend quickly.
Small fixes for newer kernels since we're coming from 3.10.108.
Change-Id: I0166e82d51a07439d15b41dbc03d7e751bfa783b Co-authored-by: Cyber Knight [email protected] [cyberknight777: forwardport and adapt to 4.14] Signed-off-by: Shreyansh Lodha [email protected] Signed-off-by: Pierre2324 [email protected] Signed-off-by: PainKiller3 [email protected] Signed-off-by: Dhruv [email protected] Signed-off-by: Cyber Knight [email protected] Signed-off-by: Carlos Jimenez (JavaShin-X) [email protected] Signed-off-by: Jebaitedneko [email protected]
[MANUAL MIRROR] The GAGening: Clothesmate edition [MDB IGNORE] (#15100)
-
The GAGening: Clothesmate edition
-
ThisShouldWork
-
hgnbhg
-
would probably help to have the right .dmi
-
fixed?
-
Fuck you
Co-authored-by: Twaticus [email protected]
[READY] [KC13] Showing "The Derelict" some love: General updates, aesthetic changes and misc (#67696)
With this PR I aim to make KC13 (TheDerelict.dmm), or Russian Station (whatever you guys call it), a tad bit more flavorful with its environment, as well as some things on the mapping side (like adding area icons!). To preface: no, I'm not remapping anything here extensively. The general layout should be relatively the same (or should be, in theory).
Halfway through naming the area icons I checked the wiki page and found out it was KC not KS, so it's KS13 internally.
Readability for turf icons is cool. Also, just making the ruin more eye-appealing would be better. General cleanup and changes will give new life to this rather.. loved? Hated? Loot pinata? Ruin. The ruin also now starts completely depowered, like Old Station (it's a Derelict, it makes no sense for it to still be powered after so long). As for some mild compensation, a few more batteries were sprinkled in to offset any issues. If there is any concern of "But they'll open the vault faster!", there were always 5 batteries that people used to make the vault SMES. Lastly, giving it some "visual story telling" is cool, as mapping fluff goes.
I also added a subtle OOC hint that the SMES in the northern most solar room needs a terminal with the following:
[Image: SMES jumpscare]
As an aside, I aim to try and keep the feel of this ruin being "dated" while at the same time having some of our newer things. With that, certain things I'll opt out of using in favor of more "generic" structures to give KC13 that true "It's old but not really" feel and look.
one more time's the charm
well let's hope so damn fucking piece of shit
fix(mu4e): support mu 1.8
Thanks to some combination of ignorance and obstinance, mu4e has thrown compatibility to the wind and completely ignored the existence of define-obsolete-function-alias. Coupled with the inconsistent/partial function renaming, this has made the mu4e 1.6⟶1.8 change particularly annoying to deal with.
By suffering the pain of doing the mu4e author's work for them, we can use defalias to give backwards compatibility a good shot for about 60 functions. Some mu4e~x functions are now mu4e--x, others are unchanged, and then you've got a few odd changes like mu4e~proc -> mu4e--server and mu4e-search-rerun. The form of message :from entries has also changed, and a new (mu4e) entrypoint added, supplanting mu4e~start.
Fix: #6511 Close: #6549 Co-authored-by: Rahguzar [email protected]
[SPARK-39869][SQL][TESTS] Fix flaky hive - slow tests because of out-of-memory
This PR adds some manual System.gc calls. I know enough that this doesn't guarantee the garbage collection and sounds somewhat funny, but it works in my experience so far, and I did such a hack in some places before.
To deflake the tests.
No, dev and test-only.
CI in this PR should test it out.
Closes #37291 from HyukjinKwon/SPARK-39869.
Authored-by: Hyukjin Kwon [email protected] Signed-off-by: Hyukjin Kwon [email protected]
Birthdays
#Most Chinese people describe themselves as diligent and thrifty. Some believe it's a Chinese value.
#Shem Young is N years old. For each birthday he receives a present. For each odd birthday he receives a toy, and for each even birthday he receives money, which he saved. Shem's naughty brother, through the years, takes 1.00 Dollar every time Shem receives money as a present. Shem has sold the toys which he received over the years, each one for P Dollars, and added the sum to the amount of saved money. With the money he wanted to buy a washing machine for X Dollars to help his mother with her household chores. Write a program that calculates how much money he has saved and whether it is enough.
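A minimal Python sketch of the calculation, using a hypothetical `savings` helper. The gift schedule is garbled in the original, so this assumes the common variant of the exercise (money on even birthdays starting at 10.00 and growing by 10.00 each time, a toy on odd birthdays, the brother taking 1.00 from every money present); treat those constants as placeholders:

```python
def savings(n: int, p: float, x: float) -> tuple[float, bool]:
    saved = 0.0
    toys = 0
    money_gift = 0.0
    for birthday in range(1, n + 1):
        if birthday % 2 == 0:
            money_gift += 10.00         # assumed: +10.00 every even birthday
            saved += money_gift - 1.00  # the naughty brother takes 1.00
        else:
            toys += 1                   # assumed: a toy on odd birthdays
    saved += toys * p                   # every toy sold for P dollars
    return saved, saved >= x

saved, enough = savings(n=21, p=1.50, x=1570.98)
print(f"saved {saved:.2f}, enough for the washing machine: {enough}")
```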
Message against the war, in Russian and English
🇷🇺 To Russian Citizens
We, the Laminas contributors, were born and live in different countries. Many of us have friends, relatives, and colleagues in both Russia and Ukraine. Some of us were born in Russia. Some of us live in Russia. Some of us have grandparents who fought the fascists in World War II. Nobody here supports fascism.
One of us has a Ukrainian relative who fled her home with her son. The train was delayed because of bombing on the road ahead. We have friends who are hiding in bomb shelters. We anxiously await word from them after the air raids, which strike indiscriminately and hit hospitals, schools, kindergartens, and homes. We are not taking this from any media; we are witnessing it directly.
You trust us enough to use our software, and we ask you to trust us again. We need help. Go out and protest against this senseless war. Stop the bloodshed. Say "No to war!"
🇺🇸 To Citizens of Russia
We at Laminas come from all over the world. Many of us have friends, family and colleagues in both Russia and Ukraine. Some of us were born in Russia. Some of us currently live in Russia. Some have grandparents who fought Nazis in World War II. Nobody here supports fascism.
One team member has a Ukrainian relative who fled her home with her son. The train was delayed due to bombing on the road ahead. We have friends who are hiding in bomb shelters. We anxiously follow up on them after the air raids, which indiscriminately fire at hospitals, schools, kindergartens and houses. We're not taking this from any media. These are our actual experiences.
You trust us enough to use our software. We ask that you trust us to say the truth on this. We need your help. Go out and protest this unnecessary war. Stop the bloodshed. Say "stop the war!"
Signed-off-by: Michał Bundyra [email protected]
Shit Pit Salvation
Bug fixes to Redwater plus a new clinic and moving the gallows, and the tribe got some love. Love you Oli <3
[clang] Implement ElaboratedType sugaring for types written bare
Without this patch, clang will not wrap in an ElaboratedType node types written without a keyword and nested name qualifier, which goes against the intent that we should produce an AST which retains enough details to recover how things are written.
The lack of this sugar is incompatible with the intent of the type printer default policy, which is to print types as written, but to fall back and print them fully qualified when they are desugared.
An ElaboratedTypeLoc without keyword / NNS uses no storage by itself, but still requires pointer alignment due to a pre-existing bug in the TypeLoc buffer handling.
Troubleshooting list to deal with any breakage seen with this patch:
1. The most likely effect one would see by this patch is a change in how a type is printed. The type printer will, by design and default, print types as written. There are customization options there, but not that many, and they mainly apply to how to print a type that we somehow failed to track how it was written. This patch fixes a problem where we failed to distinguish between a type that was written without any elaborated-type qualifiers, such as 'struct'/'class' tags and namespace specifiers such as 'std::', and one that has been stripped of any 'metadata' that identifies such, the so-called canonical types. Example:
namespace foo { struct A {}; A a; };
If one were to print the type of foo::a, prior to this patch, this would result in foo::A. This is how the type printer would have, by default, printed the canonical type of A as well. As soon as you add any name qualifiers to A, the type printer would suddenly start accurately printing the type as written. This patch will make it print it accurately even when written without qualifiers, so we will just print A for the initial example, as the user did not really write that foo:: namespace qualifier.
2. This patch could expose a bug in some AST matcher. Matching types is harder to get right when there is sugar involved. For example, if you want to match a type against being a pointer to some type A, then you have to account for getting a type that is sugar for a pointer to A, or being a pointer to sugar to A, or both! Usually you would get the second part wrong, and this would work for a very simple test where you don't use any name qualifiers, but you would discover it's broken when you do. The usual fix is to use the matcher which strips sugar, which is annoying to use: for example, if you match an N-level pointer, you have to put N+1 such matchers in there, beginning to end and between all those levels. But in a lot of cases, if the property you want to match is present in the canonical type, it's easier and faster to just match on that... This goes with what is said in 1): if you want to match against the name of a type, and you want the name string to be something stable, perhaps matching on the name of the canonical type is the better choice.
3. This patch could expose a bug in how you get the source range of some TypeLoc. For some reason, a lot of code is using getLocalSourceRange(), which only looks at the given TypeLoc node. This patch introduces a new, and more common, TypeLoc node which contains no source locations on itself. This is not an innovation here, and some other, more rare TypeLoc nodes could also have this property, but if you use getLocalSourceRange on them, it's not going to return any valid locations, because it doesn't have any. The right fix here is to always use getSourceRange() or getBeginLoc/getEndLoc, which will dive into the inner TypeLoc to get the source range if it doesn't find it on the top level one. You can use getLocalSourceRange if you are really into micro-optimizations and you have some outside knowledge that the TypeLocs you are dealing with will always include some source location.
4. Exposed a bug somewhere in the use of the normal clang type class API, where you have some type, you want to see if that type is some particular kind, you try a dyn_cast such as dyn_cast<TypedefType>, and that fails because now you have an ElaboratedType which has a TypedefType inside of it, which is what you wanted to match. Again, like 2), this would usually have been tested poorly with some simple tests with no qualifications, and would have been broken had there been any other kind of type sugar, be it an ElaboratedType or a TemplateSpecializationType or a SubstTemplateParmType. The usual fix here is to use getAs instead of dyn_cast, which will look deeper into the type. Or use getAsAdjusted when dealing with TypeLocs. For some reason the API is inconsistent there, and on TypeLocs getAs behaves like a dyn_cast.
5. It could be a bug in this patch, perhaps.
Let me know if you need any help!
Signed-off-by: Matheus Izvekov [email protected]
Differential Revision: https://reviews.llvm.org/D112374
lib/sort: make swap functions more generic
Patch series "lib/sort & lib/list_sort: faster and smaller", v2.
Because CONFIG_RETPOLINE has made indirect calls much more expensive, I thought I'd try to reduce the number made by the library sort functions.
The first three patches apply to lib/sort.c.
Patch #1 is a simple optimization. The built-in swap has special cases for aligned 4- and 8-byte objects. But those are almost never used; most calls to sort() work on larger structures, which fall back to the byte-at-a-time loop. This generalizes them to aligned multiples of 4 and 8 bytes. (If nothing else, it saves an awful lot of energy by not thrashing the store buffers as much.)
Patch #2 grabs a juicy piece of low-hanging fruit. I agree that nice simple solid heapsort is preferable to more complex algorithms (sorry, Andrey), but it's possible to implement heapsort with far fewer comparisons (50% asymptotically, 25-40% reduction for realistic sizes) than the way it's been done up to now. And with some care, the code ends up smaller, as well. This is the "big win" patch.
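To make the comparison count concrete, here is a small Python sketch of the bottom-up sift-down idea (often credited to Wegener) that this paragraph describes: descend to a leaf following the larger child, one sibling comparison per level instead of two, then climb back up to where the displaced element belongs. This is only an illustration of the technique, not the kernel's C code:

```python
def sift_down(a, root, end):
    # Phase 1: walk from the root to a leaf, always taking the larger
    # child: one comparison per level instead of two.
    x = a[root]
    path = [root]
    j = root
    while 2 * j + 1 < end:
        c = 2 * j + 1
        if c + 1 < end and a[c] < a[c + 1]:
            c += 1
        path.append(c)
        j = c
    # Phase 2: climb back up; x came from the bottom of the heap, so this
    # usually terminates after a step or two.
    k = len(path) - 1
    while k > 0 and a[path[k]] < x:
        k -= 1
    # Shift the path elements above the insertion point up one level.
    for i in range(1, k + 1):
        a[path[i - 1]] = a[path[i]]
    a[path[k]] = x

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):      # build the max-heap
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):          # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)

data = [5, 3, 8, 1, 9, 2]
heapsort(data)
assert data == [1, 2, 3, 5, 8, 9]
```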
Patch #3 adds the same sort of indirect call bypass that has been added to the net code of late. The great majority of the callers use the builtin swap functions, so replace the indirect call to sort_func with a (highly predictable) series of if() statements. Rather surprisingly, this decreased code size, as the swap functions were inlined and their prologue & epilogue code eliminated.
lib/list_sort.c is a bit trickier, as merge sort is already close to optimal, and we don't want to introduce triumphs of theory over practicality like the Ford-Johnson merge-insertion sort.
Patch #4, without changing the algorithm, chops 32% off the code size and removes the part[MAX_LIST_LENGTH+1] pointer array (and the corresponding upper limit on efficiently sortable input size).
Patch #5 improves the algorithm. The previous code is already optimal for power-of-two (or slightly smaller) size inputs, but when the input size is just over a power of 2, there's a very unbalanced final merge.
There are, in the literature, several algorithms which solve this, but they all depend on the "breadth-first" merge order which was replaced by commit 835cc0c8477f with a more cache-friendly "depth-first" order. Some hard thinking came up with a depth-first algorithm which defers merges as little as possible while avoiding bad merges. This saves 0.2*n compares, averaged over all sizes.
The code size increase is minimal (64 bytes on x86-64, reducing the net savings to 26%), but the comments expanded significantly to document the clever algorithm.
TESTING NOTES: I have some ugly user-space benchmarking code which I used for testing before moving this code into the kernel. Shout if you want a copy.
I'm running this code right now, with CONFIG_TEST_SORT and CONFIG_TEST_LIST_SORT, but I confess I haven't rebooted since the last round of minor edits to quell checkpatch. I figure there will be at least one round of comments and final testing.
This patch (of 5):
Rather than having special-case swap functions for 4- and 8-byte objects, special-case aligned multiples of 4 or 8 bytes. This speeds up most users of sort() by avoiding fallback to the byte copy loop.
Despite what ca96ab859ab4 ("lib/sort: Add 64 bit swap function") claims, very few users of sort() sort pointers (or pointer-sized objects); most sort structures containing at least two words. (E.g. drivers/acpi/fan.c:acpi_fan_get_fps() sorts an array of 40-byte struct acpi_fan_fps.)
The functions also got renamed to reflect the fact that they support multiple words. In the great tradition of bikeshedding, the names were by far the most contentious issue during review of this patch series.
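An illustrative Python analogue of the approach, assuming nothing beyond what the paragraphs above describe (the actual patch is C using 64-bit and 32-bit loads): swap word-by-word when size and alignment allow, byte-by-byte otherwise:

```python
def swap_regions(buf: bytearray, i: int, j: int, size: int) -> None:
    """Swap two non-overlapping size-byte regions of buf."""
    if size % 8 == 0 and i % 8 == 0 and j % 8 == 0:
        # Aligned multiple of 8: swap in 8-byte words.
        a = memoryview(buf)[i:i + size].cast('Q')
        b = memoryview(buf)[j:j + size].cast('Q')
        for k in range(size // 8):
            a[k], b[k] = b[k], a[k]
    else:
        # Fallback: the byte-at-a-time loop the patch tries to avoid.
        for k in range(size):
            buf[i + k], buf[j + k] = buf[j + k], buf[i + k]

buf = bytearray(b"A" * 16 + b"B" * 16)
swap_regions(buf, 0, 16, 16)
assert buf == b"B" * 16 + b"A" * 16
```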
x86-64 code size 872 -> 886 bytes (+14)
With feedback from Andy Shevchenko, Rasmus Villemoes and Geert Uytterhoeven.
Link: http://lkml.kernel.org/r/f24f932df3a7fa1973c1084154f1cea596bcf341.1552704200.git.lkml@sdf.org Signed-off-by: George Spelvin [email protected] Acked-by: Andrey Abramov [email protected] Acked-by: Rasmus Villemoes [email protected] Reviewed-by: Andy Shevchenko [email protected] Cc: Rasmus Villemoes [email protected] Cc: Geert Uytterhoeven [email protected] Cc: Daniel Wagner [email protected] Cc: Don Mullis [email protected] Cc: Dave Chinner [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected] Signed-off-by: Yousef Algadri [email protected] Signed-off-by: Panchajanya1999 [email protected] Signed-off-by: Forenche [email protected] Signed-off-by: 7Soldier [email protected] Signed-off-by: neebe000 [email protected]
chore: allow other licenses through cookiecutter variable (#20)
- chore: add 'open_source_license' cookiecutter option
includes all licenses listed here: https://choosealicense.com/licenses/ , bottom-up (least permissive at the top, most permissive at the bottom) with MIT in first place, because this is the license I chose to use - because, as the main page says, 'I want it simple and permissive' - at least for now
- refactor: make LICENSE file text dynamic
texts taken from https://choosealicense.com/licenses/
holy shit these gpl texts are huge
-
docs: adjust cookiecutter docs (no longer assuming MIT)
-
chore: do not replace /... when it's meant to literally be literal
-
refactor: set correct license and classifier in setup.cfg
-
style: remove jinja2 newlines, correctly fill in SPDX identifier of license in setup.cfg
Wow; what a pain. Fuck you, GitHub, but it works and displays nicely.
(code bounty) The tram is now unstoppably powerful. it cannot be stopped, it cannot be slowed, it cannot be reasoned with. YOU HAVE NO IDEA HOW READY YOU ARE (#66657)
ever see the tram take 10 milliseconds per movement to move 2100 objects? now you have https://user-images.githubusercontent.com/15794172/166198184-8bab93bd-f584-4269-9ed1-6aee746f8f3c.mp4
About The Pull Request
fixes #66887
done for the code bounty posted by @MMMiracles to optimize the tram so that it can be sped up. the tram is now twice as fast, firing every tick instead of every 2 ticks. and is now around 10x cheaper to move. also adds support for multiz trams, as in trams that span multiple z levels.
the tram on master takes around 10-15 milliseconds per movement with nothing on it other than its starting contents. why is this? because the tram is the canary in the coal mines when it comes to movement code, which is normally expensive as fuck. the tram does way more work than it needs to, and even finds new ways to slow the game down. I'll walk you through a few of the dumber things the tram currently does and how i fixed them.
the tram, at absolute minimum, has to move 55 separate industrial_lift platforms once per movement. this means that the tram has to unregister its entered/exited signals 55 times when "the tram" as a singular object is only entering 5 new turfs and exiting 5 old turfs every movement. it also means that each of the 55 platforms calculates its own destination turfs and checks its contents every movement. The biggest single optimization in this pr was that I made the tram into a single 5x11 multitile object that only does entering/exiting checks on the 5 new and 5 old turfs in each movement.
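The bounding-box bookkeeping reduces to a set difference. A toy sketch of that idea (illustrative Python with a made-up `footprint` helper, not the actual DM code):

```python
# When a 5x11 multitile tram moves one step along its track, only the
# freshly covered and freshly uncovered turfs need Entered()/Exited()
# processing, not all 55 tiles under the platforms.
def footprint(x, y, width=5, length=11):
    return {(x + i, y + j) for i in range(width) for j in range(length)}

old = footprint(0, 0)
new = footprint(0, 1)      # one tile of movement along the track
entered = new - old        # the 5 turfs the tram just covered
exited = old - new         # the 5 turfs it just left
assert len(entered) == len(exited) == 5
```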
way too many of the default tram contents are expensive to move for something that has to move a lot. fun fact, did you know that the walls on the tram have opacity? do you know what opacity does for movables? it makes them recalculate static lighting every time they move. did you know that the tram, this entire time, was taking JUST as much time spamming SSlighting updates as it was spending time in SStramprocess? well it is! now it doesnt do that, the walls are transparent. also, every window and every grille on the tram had the atmos_sensitive element applied to them which then added connect_loc to them, causing them to update signals every movement. that is also dumb and i got rid of that with snowflake overrides. Now we must take care to not add things that sneakily register to Moved() or the moved signal to the roundstart tram, because that is dumb, and the relative utility of simulating objects that should normally shatter due to heat and conduct heat from the atmosphere is far less than the cost of moving them, for this one object.
all tram contents physically Entered() and Exited() their destination and old turfs every movement, even though because they are on a tram they literally do not interact with the turf, the tram does. also, any objects that use connect_loc or connect_loc behalf that are on the same point on the tram also interact with each other because of this. now all contents of the tram act as if theyre being abstract_move()'d to their destination so that (almost) nothing thats in the destination turf or the exit turf can react to the event of "something laying on the tram is moving over you". the rare things that DO need to know what is physically entering or exiting their turf regardless of whether theyre interacting with the ground can register to the abstract entered and exited signals which are now always sent.
many of the things hooked into Moved(), whether it be overrides of Moved() itself, or handlers for the moved signal, add up to a LOT of processing time. especially for humans. now ive gotten rid of a lot of it, mostly for the tram but also for normal movement. i made footsteps (a significant portion of human movement cost) not do any work if the human themselves didnt do the movement. i optimized has_gravity() a fair amount, and then realized that since everything on the tram isnt changing momentum, i didnt actually need to check gravity for the purposes of drifting (newtonian_move() was taking a significant portion of the cost of movement at some points along the development process). so now it simply doesnt call newtonian_move() for movements that dont represent a change in momentum (by default all movements do).
also i put effort into 1. better organizing tram/lift code so that most of it is inside of a dedicated modules folder instead of scattered around 5 generic folders and 2. moved a lot of behavior from lift platforms themselves into their lift_master_datum since ideally the platforms would just handle moving themselves, while any behavior involving the entire lift such as "move to destination" and "blow up" would be handled by the lift_master_datum.
also https://user-images.githubusercontent.com/15794172/166220129-ff2ea344-442f-4e3e-94f0-ec58ab438563.mp4 multiz tram (this just adds the capability to map it like this, no tram does this)
Actual Performance Differences
to benchmark this, i added a world.Profile(PROFILER_START) and world.Profile(PROFILER_START) to the tram moving, so that it generates a profiler output of all tram movement without any unrelated procs being recorded (except for world.Profile() overhead). this made it a lot easier to quantify what was slowing down both the tram and movement in general. and i did 3 types of tests on both master and my branch.
also i should note that i sped up the "master" tram test to move once per tick as well, simply because the normal movement speed seems unbearably slow now. so all recorded videos are done at twice the speed of the real tram on master. this doesnt affect the main thing i was trying to measure: cost for each movement.
the first test was the base tram, containing only my player mob and the movables starting on the tram roundstart. on master, this takes around 13 milliseconds or so on my computer (which is pretty close to what it takes on the servers), on this branch, it takes between 0.9-1.3 milliseconds.
ALSO in these benchmarks youll see that tram/proc/travel() will vary significantly between the master and optimized branches. this is 100% because there are 55 times more platforms moving on master compared to the optimized branch, and thus 55x more calls to this proc. every test was recorded with the exact same amount of distance moved
here are the master and optimized benchmark text files: master master base tram.txt https://user-images.githubusercontent.com/15794172/166210149-f118683d-6f6d-4dfb-b9e4-14f17b26aad8.mp4 also this shows the increased SSlighting usage resulting from the tram on master spamming updates, which doesnt happen on the optimized branch
optimized optimization base tram.txt https://user-images.githubusercontent.com/15794172/166206280-cd849aaa-ed3b-4e2f-b741-b8a5726091a9.mp4
the second test is meant to benchmark the best case scaling cost of moving objects, where nothing extra is registered to movement besides the bare minimum stuff on the /atom/movable level. Each of the open tiles of the tram had 1 bluespace rped filled with parts dumped onto it, to the point that the tram in total was moving 2100 objects. the vast majority of these objects did nothing special in movement so they serve as a good base case. only slightly off due to the rped's registering to movement.
on master, this test takes over 100 milliseconds per movement master 2000 obj's.txt https://user-images.githubusercontent.com/15794172/166210560-f4de620d-7dc6-4dbd-8b61-4a48149af707.mp4
when optimized, about 10 milliseconds per movement https://user-images.githubusercontent.com/15794172/166208654-bc10086b-bbfc-49fa-9987-d7558109cc1d.mp4 optimization 2000 obj's.txt
the third test is 300 humans spawned onto the tram, meant to test all the shit added on to movement cost for humans/carbons. in retrospect this test is actually way too biased in favor of my optimizations since the humans are all in only 3 tiles, so all 100 humans on a tile are reacting to the other 99 humans movements, which wouldnt be as bad if they were distributed across 20 tiles like in the second test. so dont read into this one too hard.
on master, this test takes 200 milliseconds master 300 catgirls.txt
when optimized, this takes about 13-14 milliseconds. optimization 300 catgirls on ram ranch.txt
Why It's Good For The Game
the tram is literally 10x cheaper to move, and the code is better organized. currently on master the tram is as fast as running speed, meaning it has no real relative utility compared to just running the tracks (except for the added safety of not having to risk being run over by the tram). now the tram, which we have an entire map based around, can be used to its full potential.
also, has some fixes to things on the tram reacting to movement. for example on master if you are standing on a tram tile that contains a banana and the TRAM moves, you will slip if the banana was in that spot before you (not if you were there first however). this is because the banana has no concept of relative movement, you and it are in the same reference frame but the banana, which failed highschool physics, believes you to have moved onto it and thus subjected you to the humiliation of an unjust slipping. now since tram contents that dont register to abstract entered/exited cannot know about other tram contents on the same tile during a movement, this cannot happen.
also, you no longer make footstep sounds when the tram moves you over a floor
TODO
mainly opened it now so i can create a stopping point and attend to my other now staling prs, we're at a state of functionality far enough to start testmerging it anyways.
- add a better way for admins to be notified of the tram overloading the server if someone purposefully stuffs it with as much shit as they can, and for admins to clear said shit.
- automatically slow down the tram if SStramprocess takes over like, 10 milliseconds (complete. the tram still cant really check tick and yield without introducing logic holes, so making sure it doesnt take half of the tick every tick is important)
- go over my code to catch dumb shit i forgot about, there always is for these kinds of refactors because im very messy
- remove the area based forced_gravity optimization, its not worth figuring out why it doesnt work
- fix the inevitable merge conflict with master lol
- create an icon for the tram_tunnel area type i made so that objects on the tram dont have to enter and exit areas twice in a cross-station traversal
add an easy way to vv tram lethality for mobs/things being hit by it. its an easy target in another thing i already wanted to do: a reinforced concept of shared variables from any particular tram platform and the entire tram itself. admins should be able to slow down the tram by vv'ing one platform and have it apply to the entire tram for example.
Changelog
cl
balance: the tram is now twice as fast, pray it doesnt get any faster (it cant without raising world fps)
performance: the tram is now about 10 times cheaper to move for the server
add: mappers can now create trams with multiple z levels
code: industrial_lift's now have more of their behavior pertaining to "the entire lift" being handled by their lift_master_datum as opposed to belonging to a random platform on the lift.
/cl
User Puzzles, Messy Files Feature, Landing Page, and Session
- It's been a while since I committed and really, I didn't want to do it because of my lovely disgusting file structures that I can't fix because of a fear called You Gonna Mess Everything
- Tried to learn folder structure on the web and tried this for the backend; well, the frontend is still stinky as it is but I'm gonna manage it later (hopefully)
- So, I finally fixed the user puzzles route, YEY. You can also add images now in the database, which is why testing is now interesting and intriguing (without accidentally logging the buffer and medusa'ing my PC for 20 mins)
- Speaking of buffers, other than using AWS cloud storage, which is not good for a broke 17-year-old student like me, or using grid-fs from mongodb, which I'm too lazy to learn about, I tried using base64 to buffer, which is really easy to use. Unfortunately, it's only good for static media like pictures and best applied only to small images. This is really inefficient for a picture-based website?app? and it would widely reduce the quality of your ex-girlfriend's favorite sexy pose when you add it to my website.
- It's messy but yeh, I will try to fix it in the next commit
- I have a plan to use the DDD design pattern in the backend but I will have that as a feature
- I will try to integrate Jest and TypeScript since I just recently learned them last week
- I will also change the naming convention for React folders since camelCase looks ugly
- Don't mind the use of controllers and routes for now, I will fix it, promise
- Next: Game Options, The literal game, the searchbar features and Profiles
- I don't need to worry about design for now, functionality first
- This is the longest commit I've ever written. See you next week
Next Link
So, Next has a Link component that works kind of like React Router Dom v6.
import Link from 'next/link'
<Link href="/about"><a>whatever string goes</a></Link>
The major change when using Link instead of just anchor tags is that it no longer full reloads the page.
If you just use anchor tags, you'll see that it reloads the page (actually, it full reloads, even the dev tools). That is ... bad, full reloads must be avoided in my opinion.
Link makes it that Next prefetch the component to be rendered in a Link component. So, in this app, at this stage of development, we only have 2 pages: index.js and about.jsx Each page has a Link to route to either / or /about
Let's say we are at mydomain.com/ (index.js). Because we are using Link, when the file index.js is rendered, Next knows it has a Link component that goes to /about, so it prefetches that about page in memory.
The result: a similar behavior as an SPA.
Might be a bit slow if there are way too many Links; honestly, idk yet.
Ok my brother last one you got this i think it has really dumped (no explicit language sorry GitHub) up we will see or hold on but yeah just fix the issue with that rows and rowcounter classes are:
SummaryTableFactory SummarySheetFactory SheetBuilder_Summary SummaryScope TableSectionBuilder_Summary TableBuilder_Summary
Yeah, and also remember my brother: no sleeping before you're done. you're not finished when you get tired; you're finished when you're done, when you've executed
And also write Rahel, the angel deserves it. And lastly, brush the bloody hell (sorry GitHub) out of your magnificent (i got it GitHub) teeth
U gone change it i know you got this keep your head up failure will not get me down
I hope you had a nice weekend and have a nice monday my brother u got this u a demon
Today we did some draggi droppi stuff leeetts goo yeah you have to go on there tomorrow so have a great day mister and remember no going to bed before everything is done brush your teeth and write rahel a good night text she deserves it
u got this ma g
Makes Crisis robot actually worth using. (#576)
- Adds adrenaline to paramedic borg hypospray
Kinda weird how the robot who's meant to be doing paramedic shit doesn't have shit to restart the heart or apply allergic reaction first aid??? Adds /datum/reagent/medicine/adrenaline to the crisis borghypo.
- Adds ATK, ABK to Crisis Cyborg
Adds the advanced trauma and burn kits to the Crisis cyborg. This makes it a direct tradeoff between Crisis and Emergency Response flying. ER has more mobility, but worse gear for medical treatment, while the Crisis cyborg has less mobility but better gear.
- Adds tylenol and dexalin to crisis borg
Why tf does the surgical borg, specializing in surgical procedures, have better equipment for the medical doctor job than the actual medical cyborg? Tradeoff between Emergency Response and Crisis: Crisis has lower mobility, but better gear, and Emergency Response has higher mobility but worse gear.
Eta-10 sprite fix, some smaller stuff too (#144)
-
balance?
-
add reactor startup manual, fix sprites for lockers of all kinds
-
MTF See No Evil (not being used for now), no more alien MTF
-
map fix, janitor uniform
-
Eta-10 helmet works!
-
fixed
-
SM Crystal hotfix
-
Remove supermatter, for real
-
No more blood sucking artifact, sprite fix for Eta-10
"8:30am. So horrible at this. Yesterday I was obsessed about the Neurolabs interview, and now it is the General Intelligence. The place sounds really appealing to me. If I could get a chance I'll want to try impressing them upon my main idea.
Last night, I feel if it were anywhere else, the interview would have bombed, but I get the sense they are looking for a grade A autist, so (incoherent) rants are right up their alley. I can't even remember what I was talking about as I was running on fumes at that point in the day. Let me try checking the mail. With Kalepa as my sample, I know a reject could come out of the blue and at any time despite the fact the F2F interview finished on a positive note.
8:35am. No reject, but Nicole forgot to send me the Avalon link. Did a reply asking her about it. They have a little procedurally generated world for RL research, low poly 3d. She was actually interested when I mentioned my 3d experience.
The way things are going, it seems the Neurolabs guy has me on ignore; he is probably looking for somebody cheaper and does not want to waste time with me. The AssemblyAI HR drone seems to have dropped out and I have yet to hear from her about the intro interview. I do not think I got a reject there yet. If it was really a reject, they wouldn't waste time like this, and would just tell it to me directly.
8:50am. I know that interviewing should not be this strenuous, but I need a vacation so I can at least get a single proper night of sleep.
I am just no good at this. I know that I should not be hoping that I end up in any particular place. Job hunting is resigning yourself to being a ball on the roulette table, and yet I am getting emotionally involved in each and every opportunity.
9am. But it is fine if I am no good. I'll put in the effort anyway. What I want to do is write a research proposal.
9:40am. I have no idea how the last 40m went. I am in a daze.
9:45am. https://docs.godotengine.org/en/stable/getting_started/first_2d_game/03.coding_the_player.html
They mention they used Godot for Avalon.
10:25am. I am in a daze right now. Here is the plan for today: I am going to relax for a while and then start work on the research proposal.
10:35am. If the GI application fails, so be it. I'll get a job at some vanilla place at some point. But since the thread is alive, I will ignore anything else and do my best to follow it in my own way. Every application has its own dynamic. A research place like GI is different from a commercial startup like NeuroLabs, or the ML library position at AssemblyAI. Positions that don't have a uniqueness to them aren't valuable to me and I'll be able to approach them more neutrally. Having a mercenary mindset and emulating it are two different things. Being able to talk about my passion at a place like this heats me up.
https://www.youtube.com/watch?v=t1O7LpOTBfM Jade Moon Upon a Sea of Clouds - Disc 1: Glazed Moon Over the Tides|Genshin Impact
This is such a great OST. It is good relaxation music.
10:45am. Let me do something like watch Overlord and have breakfast. Actually, let me do the chores here before that.
10:35am. Taken care of the chores. Now I can have breakfast.
12:25pm. https://bakapervert.wordpress.com/heroine-survival-vol-4-chapter-9/
Let me finish reading this and I will get started on the research proposal.
1:10pm. Let me start. I'll want to do some writing. Let me get this done and I will clear my head over the weekend. If I get a reject, I'll take a break from the interviews for a few days and apply to some other places.
///
We are in the midst of a hardware renaissance and there are more chip startups than ever, while at the same time machine learning research hasn't progressed beyond backpropagation. Thousands of papers a year, each following the same research path of trying out new tricks with existing deep learning libraries. Experts understand that learning using the backpropagation algorithm has limitations - nothing in the framework can account for how to do long term credit assignment or continuous learning like the brain does. And it is easy to construct examples where gradient based optimization makes little sense. There should be something beyond it. So far, deep learning has swept away the objections of its detractors using practical success, but its successes as they are have come from using bigger, better and faster computers, rather than revolutionary algorithmic advancements. Attempts are being made, but nobody has discovered a principle to serve as the basis for engineering the next generation of learning systems.
I myself have tried to move beyond backpropagation and failed. It is strange that nobody else has managed it either, but at some point one has to accept that the problem is intrinsically difficult to reason about for humans. Moreover, not being able to do it is humiliating. Intelligence is something we are always surrounded by and see examples of all the time, and yet we cannot grasp it.
Maybe it would be best to just ask the machines how to do it? With the right setup and sufficiently powerful hardware, the hardware itself should be able to tell us how to best make use of it.
- Implement a game environment on an AI chip
- Implement a genetic programming system on an AI chip.
- Use it to infer a learning algorithm through game play.
The hope is that the learning algorithm would be a novel discovery we could study and think about, and mine for insights into the true principles of learning. The way ML research is currently done is inefficient. It relies very heavily on the ingenuity and imagination of the researcher itself who has to take great care in testing out a single one of his ideas at a time. It would be better to automate that in imitation of the ultimate scientist that is nature.
Genetic programming and evolutionary algorithms aren't a new thing by now and haven't produced particularly notable results. But then again, even in the early 20th century, there were Turing machines and lambda calculus, so programming existed - you could do it on a piece of paper. But it was not until decades later that the dawn of computers came and programming became a profession. I believe that if I had an oracle and asked it for an optimal algorithm for some game given realistic hardware constraints, the results would be quite enlightening. It certainly would not be backprop based.
The way to think about AI chips is that they are parallel computing devices, with many cores which communicate using message passing and only have access to their own local memory. They are not like GPUs, which have a different parallelism model, but are more like a cluster of instances on the cloud. Each instance has its own resources and cannot read the memory of another directly. Instead it has to communicate over the network. This kind of computing model will arise in AI chips and pave the way to the next generation of computing devices. Whereas CPUs' frequency scaling and now core count are stuck, these kinds of devices are the true spiritual successors to the CPUs of old, carrying on the will of Moore's Law.
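A toy model of that computing style, sketched with Python threads and queues standing in for cores and the on-chip network (purely illustrative; a real chip exposes this through its own SDK):

```python
import queue, threading

def core(inbox, outbox, steps=3):
    local_state = 0              # private memory: nothing is shared
    for _ in range(steps):
        msg = inbox.get()        # blocking receive from the "network"
        local_state += msg
        outbox.put(local_state)  # communicate only by message passing

# Two cores in a ring, bouncing an accumulating value back and forth.
q01, q10 = queue.Queue(), queue.Queue()
t0 = threading.Thread(target=core, args=(q10, q01))
t1 = threading.Thread(target=core, args=(q01, q10))
q10.put(1)                       # seed the first core's inbox
t0.start(); t1.start(); t0.join(); t1.join()
print(q10.get())                 # 8: the value after six alternating hops
```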
Turing completeness being what it is, you could do what I propose on the cloud even without the use of exotic hardware, but it would be prohibitively expensive. Having the cloud right in your desktop rig, instead, is what would make the costs of pursuing this path of research viable. I am being metaphoric: while in the future these chips will be available for purchase even by regular individuals for their PCs, the most likely avenue of research is to rent them over the cloud due to the need to scale.
Due to their computing model, GPUs aren't good when the task requires their individual cores to communicate. It would be difficult to implement an interpreter and an AST instancer and mutator on them. Furthermore, it would be difficult to implement a game directly on them for the same reason.
Pairing a CPU-hosted simulator with a GPU-hosted learner is the dominant paradigm when doing RL research, but it has significant efficiency issues. The communication overhead between the devices is harmful to performance. Even worse, unlike the brain, GPUs are poor at processing singular inputs. In supervised learning, where you have a static dataset, this is less of an issue, but piping a simulator to a learning system hosted on the GPU is painful.
With the right hardware, all those problems will go away.
Given the benefits they could bring to their users, the essence of my proposal is that serious ML researchers should keep the hardware coming down the pipeline in mind, and even start preparing for them. It is an adventure as a new domain gets discovered. New hardware will bring with it new research and programming opportunities.
In backpropagation, you could consider the individual parameters to be competing against each other. But it is not like the layers themselves have the freedom to change the way they learn as the training progresses. Instead of considering a system made out of layers like in deep learning, why not have each of the AI chip cores have its own individual programming that evolves according to the outcomes of gameplay? The static part of each core would be the genetic programming system evolving the AST defining its behavior, and an interpreter executing such an AST. Each core would have the ability of suppressing or accepting the messaging of others, in essence defining its own interaction protocol with the others. The hope is that through training, the individual units would converge to proper and efficient learning algorithms and also that efficient inter-core communication patterns would arise naturally through the training process.
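As a concrete toy version of the static part of one such core - an interpreter plus an AST mutator and a selection loop - here is a minimal genetic-programming sketch. It regresses against x^2 + x instead of learning through game play, purely to stay self-contained; every detail here is an illustrative assumption, not the proposal's design:

```python
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth <= 0 or random.random() < 0.3:
        return random.choice(['x', random.uniform(-2.0, 2.0)])  # leaf
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if isinstance(tree, str):    # the variable 'x'
        return x
    if isinstance(tree, float):  # a constant
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree):
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree()     # replace a random subtree wholesale
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def fitness(tree):               # summed squared error against x^2 + x
    xs = [i / 10 for i in range(-10, 11)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

population = [random_tree() for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness)               # rank by fitness
    survivors = population[:50]                # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]
best = min(population, key=fitness)
print(best, fitness(best))
```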
I haven't done a full literature search on whether this has been done. Evolving learning algorithms has certainly been done. For example: AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. Like in that paper, the research I've seen fundamentally tries to evolve a specific learning algorithm that can then be run on a single GPU and on a static dataset. Backprop seems to be good on static datasets and supervised learning tasks, so the resulting algorithms are just hacky variations on it rather than revolutionary gusts of insight the researcher would hope.
It could be done on the cloud, but you'd probably want 10k independent instances to even begin such research, and very few could afford to rent that much computational power. AI chips with 10k cores, however, will not be rare. Doing this research proposal on current CPU/GPU systems would be quite difficult, and it would not be natural by any means; you'd have to contort the system to make it fit. With AI chips, the proposal pretty much falls out of the architecture.
The proposal would work given sufficient computing power. Maybe a single chip of 10k cores would not be enough to make a good game-playing agent or evolve particularly interesting algorithms, but with enough of them the proposal would certainly succeed. Right now, finding the next generation of learning algorithms is a big deal, but in the future the hardware will be powerful enough that anybody will be able to derive them from scratch. Intelligence is so vital to a system's survival that there is something about the structure of the universe itself that supports its evolution and development. Developing learning algorithms might be hard to do wrong and easy to do right, even though it seems simply hard all around to us now.
The big question is just how much computational power would be required to evolve revolutionary learning algorithms using a genetic programming system. Right now we could spend $10k on researching this, and maybe go to $100k if that fails. But after that, do we go to $1m, $10m or $100m, just throwing money into a furnace?
I suggest that this proposal simply be kept in mind and retried every few years on a small scale. Maybe today the project would fail, but in a few years the hardware will be better and it could be tried more cheaply. Maybe in the future there will be material breakthroughs in memristors, and we will have access to devices with terabytes of non-volatile memory on a single chip. $10k of computation in 2032 might go for $1m today, which would make success a lot more likely. Of course, by that point, maybe some genius will reason out the way learning works without relying on tools, or neuroscientists will reveal the secrets of the brain. I'd prefer the insights come as soon as possible regardless of the path taken to the source, but I'd rather be proactive. There is no need to trust or rely on the ML community at large to deliver insights and algorithms given its track record, and no need to look for wisdom in the landfill of papers it produces annually.
This project would be difficult for an average ML researcher due to inadequate tooling. They might be versed in Python, but how will they deal with programming a GP system on a novel computing device with sparse libraries and access only to a low-level programming language such as C? To me such a situation would be easy. I have my own programming language, Spiral. Two weeks ago I did a ref-counted C backend in anticipation of pursuing this line of research on my own. It will serve as a prototype that I can easily adapt to any kind of device that I want to program. It would take a few days to create a backend, but after that I would be able to program these devices in a high-level functional programming language.
I've never made a genetic programming system, but I have significant PL experience, and creating an interpreter is a cakewalk for me. Once I get access to a device suitable for implementing it, I'll have to go over the literature and study the papers from the field, but I am well versed in implementing ML algorithms by now. In the first place, the reason Spiral was created was specifically to implement ML libraries on esoteric devices. The motivation for creating it was the difficulty of building a GPU-based deep learning library in F#. I wanted a language that is high level and comfortable to program in, while at the same time being highly efficient for the sake of GPU kernel compilation, as well as capable of easy interop between separate language backends. I succeeded in that goal, and today have something suitable to use on AI chips.
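As an illustration of why the interpreter half is the easy part, here is a minimal tree-walking interpreter in C for the kind of arithmetic AST a GP mutator would operate on (a sketch, not Spiral's actual backend):

#include <stdio.h>

typedef enum { CONST, VAR, ADD, MUL } Tag;

typedef struct Node {
    Tag tag;
    double value;                /* CONST */
    const struct Node *l, *r;    /* ADD / MUL */
} Node;

/* Recursively evaluate the AST with x bound to the single variable. */
static double eval(const Node *n, double x)
{
    switch (n->tag) {
    case CONST: return n->value;
    case VAR:   return x;
    case ADD:   return eval(n->l, x) + eval(n->r, x);
    case MUL:   return eval(n->l, x) * eval(n->r, x);
    }
    return 0.0;
}

int main(void)
{
    /* (x * x) + 2.0, the kind of candidate a mutator might produce */
    Node x    = { VAR,   0.0, 0, 0 };
    Node two  = { CONST, 2.0, 0, 0 };
    Node xx   = { MUL,   0.0, &x, &x };
    Node expr = { ADD,   0.0, &xx, &two };
    printf("%f\n", eval(&expr, 3.0));   /* prints 11.000000 */
    return 0;
}

A GP system would grow and mutate such trees; the evolving part is the tree, while the interpreter itself stays static on every core.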
I simply just need access to such chips and the funds to support such research.
Last year I did some research on companies in the AI chip field. There are dozens of startups, and many big companies are making their own accelerators, but only 3 companies made an impression on me. Many startups focus on just inference, for example, and are insufficiently programmable to support a wide range of functionality, and/or their hardware is photonic or quantum and still in the labs, so it is not something I'd be able to get my hands on any time soon. After going through the list, I pared it down to 3: Tenstorrent, Groq and Graphcore.
Out of the 3, Tenstorrent has the strongest vision, and based on hearsay the best software stack.
Not all is rosy for them in their competition with Nvidia.
On the benchmarks where they lead, their advantage is disappointing, and in the rest of the matchups they don't even bother showing up. The 3 companies are being bodied by Nvidia's software muscle. But this does not matter to me, as I am not interested in training large convolutional or transformer models in supervised learning regimes. Instead I want to investigate how suitable those chips are for making simulators boosted by ML. There might be significant business opportunities in this, and personally I find this line of work more interesting than the way ML is currently being done.
Last year I tried applying to Intel. The job was for a PL-related position. The division at Intel was a recently acquired company whose business was making routers. The problem they have is that the router has limited memory; in fact it only has registers, which need to be completely pre-allocated at compile time. There is no leeway to fall back on other kinds of memory if the registers are insufficient to hold the data. In other words, if the compiler cannot assign enough registers at compile time, the compilation fails even if the program is otherwise valid. Register assignment requires solving a graph coloring problem, which is NP-hard (a toy greedy-coloring sketch follows the list below). I found that interesting because I had taken a Discrete Optimization course back in 2015 and knew good ways of doing it. This story ended with the recruiter telling me that:
- They don't want a language like Spiral at Intel.
- Haven't exhausted the possibilities for deterministic compilation and don't want to consider randomized algorithms.
- Aren't interested in compilation on non-x64 chips.
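Here is that toy sketch: greedy interference-graph coloring, the classic heuristic behind register assignment. The interference graph is invented, and real allocators use smarter orderings; randomized and optimization-based approaches were what I had in mind for their problem:

#include <stdbool.h>
#include <stdio.h>

#define N 5   /* virtual registers */
#define K 3   /* physical registers (colors) */

int main(void)
{
    /* interference[i][j]: virtual registers i and j are live simultaneously */
    bool interference[N][N] = {
        {0,1,1,0,0},
        {1,0,1,1,0},
        {1,1,0,1,0},
        {0,1,1,0,1},
        {0,0,0,1,0},
    };
    int color[N];
    for (int i = 0; i < N; i++) {
        bool used[K] = {false};
        for (int j = 0; j < i; j++)          /* colors taken by neighbors */
            if (interference[i][j])
                used[color[j]] = true;
        color[i] = -1;
        for (int k = 0; k < K; k++)          /* lowest free color */
            if (!used[k]) { color[i] = k; break; }
        if (color[i] < 0) {
            /* On hardware with nothing to spill to, this is a hard
             * compile error, as described above. */
            printf("v%d: cannot allocate, compilation fails\n", i);
            return 1;
        }
        printf("v%d -> r%d\n", i, color[i]);
    }
    return 0;
}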
I didn't get the job, but that episode made me aware that simulators boosted by NNs could have a stronger impact than expected. Later, I studied 3D, and saw research on raytracing boosted by NNs. On the same channel there are examples of physics simulations being accelerated by NNs.
The point is, there are places where DL could be useful that you wouldn't think of. It is not supervised learning, but it is not reinforcement learning either; it is something in between. It is an opportunity that CPU + GPU combos are poorly suited to handle, but where AI chips will find a natural fit. This is even without significant algorithmic advances.
///
4:10pm. Let me have lunch here.
4:30pm. Done with lunch. I've finished writing the review and am just running it through Google Docs.
4:45pm. Done checking it. Let me convert it to pdf.
https://www.markdowntopdf.com/
Oh, the result of this is actually quite nice. The document looks quite fancy.
4:50pm. I guess I'll send it to GI. It will serve as something to talk about on the next stage.
Thank you for taking the time to chat with me yesterday. We were able to sync up as a team, and after much consideration, we've made the decision to not move forward at this time. That being said, we’d like to keep your resume on file as our team continues to grow.
Those guys sure are in a hurry to reject me.
You guys sure are in a hurry to decide. Today I spent some time crafting a research proposal and here it is. Would it change your mind?
Anyway, I lose nothing from sending this.
5pm. But it is hard to change one's mind once one has decided on something. They get billions of resumes a day, and it seems they have fallen into the trap of looking for a unicorn. Otherwise they would not still be a 10-man team after so long.
5:05pm. Remote applications suck. Maybe I should look for more vanilla work next?
Honestly, it is really hard for me to make a positive impression in these face to face interviews. I am nervous during it. Maybe it will be a while until I get my first job. It really is hard aiming high. But I need to keep at it.
Ok, so Kalepa is down, Neurolabs is ignoring me, AssemblyAI rescheduled indefinitely, and GI has moved on.
5:10pm. Next week I will get ready for the next round of applications. Maybe I should aim a bit lower, in the 100-150k range, and look for a 200k opportunity while I am on the job?
I am just too inexperienced at this and 9:30pm is not the right time for me to be interviewing.
5:15pm. Anyway, not counting Z and Viktor's interview from last year, this is the first batch since I started this.
While that is going on in the background, what I am going to do is check out Groq's and Graphcore's cloud offerings, if there are any. Interviews going badly is one thing, but why is Tenstorrent's cloud service ignoring my message?
5:15pm. Right. Tomorrow I am going to do that while in the background I study AWS. It seems that to get these highly paid jobs, I need a firm psychological mastery during interviewing. That can only come with experience.
5:25pm. Despite my conditioning, during this initial stage, every interview feels meaningful. I think after a few dozen attempts I'll stop giving a shit and be able to do it properly. I need to work towards getting such a mindset.
Also, based on my research proposal, it seems all I need to cause the Singularity is to implement such a system. It feels like I've been moping around far too long. Deep down, even if I try to put up a cheery exterior, my spirit was really rattled by my failure to win at poker.
But who cares about poker?
Compared to poker, wouldn't finding a greater learning algorithm be a much more remarkable accomplishment? Once I have it, not just poker, but any game would be ripe for the picking.
Instead of assuming I need very future hardware, maybe some insights could be unearthed now?
5:35pm. A part of me is thinking: "Will the Singularity even happen at this rate?"
But maybe what I should be thinking is: "Is causing the Singularity that big of a deal?"
I mean, I did write out the way to find the learning algorithm. Why would it fail?
The reason why I've been wallowing in my own filth for the past year is that I've compromised on my vision. A real programmer would have just inferred the learning algorithm if he did not have it. Of course I am not going to accomplish anything with backprop; I was destined to fail following that path from the start.
I lacked Nature's Eyes.
5:40pm. Yeah, it is humiliating. Not knowing how to make the machines learn, being like some ball on a roulette wheel, acting like a NPC by taking up art.
5:50pm. Siiiggghhhh...this is not how I want to live. It was wrong.
The path not taken was the right one. If I had attempted poker from an evolutionary perspective by evolving programs, sure, I would have been blindsided once the deep learning craze started. But with that approach, I would have arrived at the 'A question for the machine' proposal a lot earlier.
5:55pm. Worrying about job paying 50 or 150k or things like that. I do not want to live like that. I should live with the desire to grasp everything. Even poker was just a compromise because I thought I could not get to the real thing.
If the reasoning in the proposal is good, it will give me the right learning algorithm. Once I have such an algorithm, it will give me a lot of power to do what I want.
6:15pm. The Singularity is an arms race. If the proposal I've made is not enough to grasp the algorithm then maybe it is really impossible for an individual to compete. Then life was meaningless to begin with so I do not have to feel bad about losing either way. But if solipsism is destiny, then there will be a path to victory regardless of resources or talent. To begin with, the only way for an individual to win is if the universe is fake. This universe being real would mean I am no greater than some bug on the pavement.
I should start prototyping this proposal in preparation for when I get the AI chip. I could imitate it a bit on the cloud even if I can't run it for long. I should embrace the cloud and feel the potential power it gives me. Then I should get access to one of those chips on their own cloud; I should be able to get Graphcore at least. I need to try programming it. Once I do, I will know where things stand.
6:20pm. Right now, let me just chill.
6:25pm. In the past year I've tasted weakness, but I should find my pride and prove that apex programming skill is not such a weak thing. I should hold this desire close to my heart and internalize it.
Merge #84286 #84450 #84783 #85146 #85199 #85259 #85281 #85284
84286: streamingccl: heartbeat persisted frontier r=samiskin a=samiskin
Resolves #84086
Previously we would forward the ingestion frontier's resolved timestamp at the point of heartbeat; however, this could result in the protected timestamp record of the producer exceeding the ingestion job's persisted frontier.
This PR uses the last persisted frontier value instead.
Release note (bug fix): The protected timestamp of the producer job is no longer able to exceed the persisted ingestion frontier.
84450: sql,ccl: secondary regions base implementation r=ajwerner a=e-mbrown
Refs #68358
Previously, when the primary region failed, the leaseholders would go to a random region. Secondary regions provide a failover region that leaseholders move into if the primary region is down. When a secondary region is set, a secondary lease preference is added to tables and partitions. This should also ensure two voters are moved into the secondary region.
Release justification: Release note (sql change): CockroachDB now supports secondary regions. Secondary regions makes it possible to specify a failover region, which will receive the leaseholder if the primary region fails.
84783: sql: add CREATE … NOT VISIBLE to parser r=wenyihu6 a=wenyihu6
This commit adds parsing support for CREATE INDEX … NOT VISIBLE and CREATE TABLE (... INDEX() NOT VISIBLE).
This commit does not add any logic to the optimizer, and executing it returns an “unimplemented” error immediately.
Assists: cockroachdb/cockroach#72576
See also: cockroachdb/cockroach#84912
Release note (sql change): The parser now supports creating an index with the option to mark it as not visible. No implementation has been done yet, however, and executing it returns an "unimplemented" error immediately.
85146: [backupccl] Use Expr for backup's Detached and Revision History options r=benbardin a=benbardin
This will allow us to set them to null, which will be helpful for ALTER commands.
Release note: None
85199: sql: move function properties to overload level r=chengxiong-ruan a=chengxiong-ruan
There're 3 commits:
(1) sql: use Overload in FunctionDefinition instead of overloadImpl
It feels a bit annoying to always need to cast overloadImpl to Overload. In fact, all overloadImpls in FunctionDefinition are Overloads.
(2) sql: move function properties to overload level
Moving function properties to the overload level, but still keeping them in FunctionDefinition, since they're still needed for internal usage like docgen. Later on, the function resolution interface will return a resolved version of the function definition which won't have the properties field. Instead, we'll go through overloads to fetch properties for the resolved function definition. The current FunctionDefinition will be kept for builtin functions only.
(3) sql: remove usage of GetBuiltinProperties from GetSequenceFromFunc
In GetSequenceFromFunc we resolve a function and then use the name to get builtin properties. This seems unnecessary, because we may just get function properties from the resolved function definition. With this change, we eliminate a good portion of the usages of GetBuiltinProperties, which seems to be the majority of them.
There shouldn't be any functionality changes since we don't have UDFs yet.
85259: roachtest: fix cdc/schemareg r=ajwerner a=ajwerner
Now that DROP COLUMN uses the declarative schema changer, you no longer see touch writes.
Fixes #84789
Release note: None
85281: backupccl: show backup was incorrectly capturing a ctx r=yuzefovich a=adityamaru
This diff fixes a span use-after-finish that resulted from the show backup code incorrectly capturing a context.
Fixes: #85201
Release note: None
85284: dev: fix test --changed when no files are changed r=rail a=rickystewart
With the previous version of the code this would fail with a confusing error message if no files were changed.
Release note: None
Co-authored-by: Shiranka Miskin [email protected] Co-authored-by: richardjcai [email protected] Co-authored-by: wenyihu3 [email protected] Co-authored-by: Ben Bardin [email protected] Co-authored-by: Chengxiong Ruan [email protected] Co-authored-by: Andrew Werner [email protected] Co-authored-by: Aditya Maru [email protected] Co-authored-by: Ricky Stewart [email protected]
random: implement getrandom() in vDSO
Two statements:
- Userspace wants faster cryptographically secure random numbers of arbitrary size, big or small.
- Userspace is currently unable to safely roll its own RNG with the same security profile as getrandom().
Statement (1) has been debated for years, with arguments ranging from "we need faster cryptographically secure card shuffling!" to "the only things that actually need good randomness are keys, which are few and far between" to "actually, TLS CBC nonces are frequent" and so on. I don't intend to wade into that debate substantially, except to note that recently glibc added arc4random(), whose goal is to return a cryptographically secure uint32_t. So here we are.
Statement (2) is more interesting. The kernel is the nexus of all entropic inputs that influence the RNG. It is in the best position, and probably the only position, to decide anything at all about the current state of the RNG and of its entropy. One of the things it uniquely knows about is when reseeding is necessary.
For example, when a virtual machine is forked, restored, or duplicated, it's imperative that the RNG doesn't generate the same outputs. For this reason, there's a small protocol between hypervisors and the kernel that indicates this has happened, alongside some ID, which the RNG uses to immediately reseed, so as not to return the same numbers. Were userspace to expand a getrandom() seed from time T1 for the next hour, and at some point T2 < T1 + 1 hour the virtual machine forked, userspace would continue to provide the same numbers to two (or more) different virtual machines, resulting in potential cryptographic catastrophe. Something similar happens on resuming from hibernation (or even suspend), with various compromise scenarios there in mind.
There's a more general reason why userspace rolling its own RNG from a getrandom() seed is fraught. There's a lot of attention paid to this particular Linuxism we have of the RNG being initialized and thus non-blocking or uninitialized and thus blocking until it is initialized. These are our Two Big States that many hold to be the holy differentiating factor between safe and not safe, between cryptographically secure and garbage. The fact is, however, that the distinction between these two states is a hand-wavy wishy-washy inexact approximation. Outside of a few exceptional cases (e.g. a HW RNG is available), we actually don't really ever know with any rigor at all when the RNG is safe and ready (nor when it's compromised). We do the best we can to "estimate" it, but entropy estimation is fundamentally impossible in the general case. So really, we're just doing guess work, and hoping it's good and conservative enough. Let's then assume that there's always some potential error involved in this differentiator.
In fact, under the surface, the RNG is engineered around a different principle, and that is trying to use new entropic inputs regularly and at the right specific moments in time. For example, close to boot time, the RNG reseeds itself more often than later. At certain events, like VM fork, the RNG reseeds itself immediately. The various heuristics for when the RNG will use new entropy and how often are really a core aspect of what the RNG has some potential to do decently enough (and something that will probably continue to improve in the future from random.c's present set of algorithms). So in your mind, put away the mental attachment to the Two Big States, which represent an approximation with a potential margin of error. Instead keep in mind that the RNG's primary operating heuristic is how often and exactly when it's going to reseed.
So, if userspace takes a seed from getrandom() at point T1, and uses it for the next hour (or N megabytes or some other meaningless metric), during that time, potential errors in the Two Big States approximation are amplified. During that time potential reseeds are being lost, forgotten, not reflected in the output stream. That's not good.
The simplest statement you could make is that userspace RNGs that expand a getrandom() seed at some point T1 are nearly always worse, in some way, than just calling getrandom() every time a random number is desired.
For those reasons, after some discussion on libc-alpha, glibc's arc4random() now just calls getrandom() on each invocation. That's trivially safe, and gives us latitude to then make the safe thing faster without becoming unsafe, at our leisure. Card shuffling isn't particularly fast, however.
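For a sense of what "just call getrandom() every time" looks like from userspace today, a minimal sketch, assuming Linux with a glibc new enough to provide the getrandom(2) wrapper in <sys/random.h>:

#include <stdint.h>
#include <stdio.h>
#include <sys/random.h>

/* Draw each value directly from the kernel instead of expanding a seed in
 * userspace, so reseeds (VM forks, new entropy, ...) are always reflected. */
static uint32_t secure_u32(void)
{
    uint32_t x;
    if (getrandom(&x, sizeof x, 0) != sizeof x) {
        /* A real program would retry on EINTR and handle ENOSYS; this is
         * only a sketch. */
        perror("getrandom");
    }
    return x;
}

int main(void)
{
    printf("%u\n", secure_u32());
    return 0;
}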
How do we rectify this? By putting a safe implementation of getrandom() in the vDSO, which has access to whatever information a particular iteration of random.c is using to make its decisions. I use that careful language of "particular iteration of random.c", because the set of things that a vDSO getrandom() implementation might need for making decisions as good as the kernel's will likely change over time. This isn't just a matter of exporting certain data to userspace. We're not going to commit to a "data API" where the various heuristics used are exposed, locking in how the kernel works for decades to come, and then leave it to various userspaces to roll something on top and shoot themselves in the foot and have all sorts of complexity disasters. Rather, vDSO getrandom() is supposed to be the same exact algorithm that runs in the kernel, except it's been hoisted into userspace as much as possible. And so vDSO getrandom() and kernel getrandom() will always mirror each other hermetically.
API-wise, vDSO getrandom has this signature:
ssize_t getrandom(void **state, void *buffer, size_t len, unsigned long flags);
The return value and the latter 3 arguments are the same as ordinary getrandom(). The first argument is a double pointer to some state that vDSO allocates and manages. Call it first with *&my_state==NULL, and subsequently with the same &my_state, and only that first call will allocate. We very intentionally do not leave state memory management up to the caller. There are too many weird things that can go wrong, and it's important that vDSO does not provide too generic of a mechanism. It's not going to store its state in just any old memory address. It'll do it only in ones it allocates.
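A hypothetical caller-side sketch of that contract; the vDSO symbol lookup (via the auxiliary vector) is omitted, and the function-pointer type simply mirrors the signature quoted above:

#include <stddef.h>
#include <sys/types.h>

typedef ssize_t (*vdso_getrandom_fn)(void **state, void *buffer,
                                     size_t len, unsigned long flags);

static void *rng_state;   /* starts NULL; the vDSO allocates it itself */

ssize_t draw(vdso_getrandom_fn vdso_getrandom, void *buf, size_t len)
{
    /* On the first call *(&rng_state) == NULL, so the vDSO allocates its
     * own state; later calls reuse the same state through &rng_state. */
    return vdso_getrandom(&rng_state, buf, len, 0);
}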
Right now this means it's a mlock'd page with WIPEONFORK set. In the future maybe there will be other interesting page flags or anti-heartbleed measures, or other platform-specific kernel-specific things that can be set. Again, it's important that the vDSO has a say in how this works rather than agreeing to operate on any old address; memory isn't neutral.
Because WIPEONFORK implies a whole page, vDSO getrandom() itself uses vDSO getcpu() in order to shard into various buckets, so that this remains fast from multiple threads.
The interesting meat of the implementation is in lib/vdso/getrandom.c, as generic C code, and it aims to mainly follow random.c's buffered fast key erasure logic. Before the RNG is initialized, it falls back to the syscall. Right now it uses a simple generation counter to make its decisions on reseeding; this covers many cases, but not all, so this RFC still has a little bit of improvement work to do. But it should give you the general idea.
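The generation-counter idea mentioned above, reduced to a hedged sketch; the names and layout here are invented, and the real data lives in the vDSO shared pages:

#include <stdatomic.h>
#include <stdbool.h>

struct vgetrandom_state {
    unsigned long generation;   /* snapshot of the kernel's reseed counter */
    unsigned char key[32];      /* locally expanded key material */
};

extern _Atomic unsigned long rng_generation;  /* bumped by the kernel on reseed */

static bool needs_reseed(const struct vgetrandom_state *s)
{
    /* If the kernel reseeded (VM fork, new entropy, ...), the shared counter
     * has moved on and the locally expanded key must be discarded. */
    return atomic_load(&rng_generation) != s->generation;
}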
The actual place that has the most work to do is in all of the other files. Most of the vDSO shared page infrastructure is centered around gettimeofday, and so the main structs are all in arrays for different timestamp types, and attached to time namespaces, and so forth. I've done the best I could to add onto this in an unintrusive way, but you'll notice almost immediately from glancing at the code that it still needs some untangling work. This also only works on x86 at the moment. I could certainly use a hand with this part.
So far in my test results, performance is pretty stellar, and it seems to be working. But this is very, very young, immature code, suitable for an RFC and no more, so expect dragons.
Cc: [email protected] Cc: [email protected] Cc: Nadia Heninger [email protected] Cc: Thomas Ristenpart [email protected] Cc: Theodore Ts'o [email protected] Cc: Vincenzo Frascino [email protected] Cc: Adhemerval Zanella Netto [email protected] Cc: Florian Weimer [email protected] Signed-off-by: Jason A. Donenfeld [email protected]
Fix the annoying part in the read me
i just fucking hate having to add haxelib install to every fucking line in this
Patch for Head Pack 4 (#146)
- Re-added overwritten head 54, now as Head 100. Overwrote the ugly old NWN head that nobody used, because I didn't want to fuck with IDs.
- Adjusted scaling on Male Hum/Mir/Chi/Ech 50 & 51.
bobmed 8 part 1.5 whats wrong with my brain holy fuck (!)
Manually copy trailing attributes on a resize (#12637)
This is a fairly naive fix for this bug. It's not terribly performant, but neither is resize in the first place.
When the buffer gets resized, typically we only copy the text up to the MeasureRight point, the last printable char in the row. Then we'd just use the last char's attributes to fill the remainder of the row.
Instead, this PR changes how reflow behaves when it gets to the end of the row. After we finish copying text, then manually walk through the attributes at the end of the row, and copy them over. This ensures that cells that just have a colored space in them get copied into the new buffer as well, and we don't just blat the last character's attributes into the rest of the row. We'll do a similar thing once we get to the last printable char in the buffer, copying the remaining attributes.
This could DEFINITELY be more performant. I think this current implementation walks the attrs on every cell, then appends the new attrs to the new ATTR_ROW. That could be optimized by just using the actual iterator. The copy after the last printable char bit is also especially bad in this regard. That could likely be a blind copy - I just wanted to get this into the world.
Finally, we now copy the final attributes to the correct buffer: the new one. We used to copy them to the old buffer, which we were about to destroy.
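A runnable toy sketch of the core idea (attr_t and the flat row layout are invented for illustration; the real buffer uses run-length-encoded attribute rows and iterators):

#include <stdio.h>

typedef unsigned attr_t;   /* hypothetical packed color/flag attributes */

/* After the text copy stops at the last printable cell, walk the rest of the
 * old row and copy each cell's attributes individually, instead of flooding
 * the tail with the last character's attributes. */
static void copy_trailing_attrs(const attr_t *old_row, int old_width,
                                attr_t *new_row, int new_width,
                                int last_printable)
{
    for (int x = last_printable + 1; x < old_width && x < new_width; x++)
        new_row[x] = old_row[x];   /* preserves colored-space cells */
}

int main(void)
{
    attr_t old_row[6] = {7, 7, 7, 42, 42, 42};      /* colored spaces after col 2 */
    attr_t new_row[8] = {7, 7, 7, 0, 0, 0, 0, 0};   /* resized, wider row */
    copy_trailing_attrs(old_row, 6, new_row, 8, 2);
    for (int x = 0; x < 8; x++)
        printf("%u ", new_row[x]);                  /* 7 7 7 42 42 42 0 0 */
    printf("\n");
    return 0;
}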
I'll add more gifs in the morning, not enough time to finish spinning a release Terminal build with this tonight.
Closes #32 🎉🎉🎉🎉🎉🎉🎉🎉🎉 Closes #12567
(cherry picked from commit 855e1360c0ff810decf862f1d90e15b5f49e7bbd)
Loadout Update
General Description: This PR updates several loadouts for followers, wastelanders, far-landers, and the Redwater faction.
FOLLOWERS CHANGES
STARTING TEXT
- Starting text, including Description, Enforces and Forbids, has been edited to reflect the standard the server wants to see, and references to the NCR and the Legion have been omitted.
- For the Admin and Guard there is slightly different text, to match their job descriptions.
LOADOUTS
- Removed the CHEMWIZ trait, which somehow fixed them not having CHEMWHIZ.
- Professors now have loadouts: two robust options which provide new machine boards to the Followers when they join as part of their loadout. The two loadouts are the Environmental Scientist, who has hydroponic boards, and the Medical Specialist, who has a blood bank.
- Specialists have some tweaks to make the Medical Researcher more robust, adding a Bluespace syringe and an advanced health analyzer to their loadout.
- Randomly, it seems the Volunteer loadout, which has tools and construction stuff, had a chemist PDA. This has been removed, and now the weakest loadout, the Student, has a PDA.
- The Followers Guard had a tweak: the long-range loadout has a scope, and the shotgun for the short-range loadout has been changed to a police shotgun, which is more in line with the aesthetic and starts with beanbag rounds - though its total capacity is 6, as opposed to the previous 4.
These changes are intended to bring more value and encourage more players to participate in these roles. If you have any suggestions for additions, changes or subtractions let me know in the comments.
WASTELANDER CHANGES
LOADOUTS
Several additions and tweaks have been done to the Wastelander loadouts in order to properly reflect a myriad of playstyles.
- Small changes include: changing a welding helmet to welding goggles, adding extra magazines where there was only one, tweaking the Settler loadout to be more settler-y by giving them glass instead of wood, giving them a more melee-focused build that resembles tools, and adding seeds to their loadout. The Wasteland Medic has been tuned down, with salts and the surgery bag removed, and the Vaultie has lost their headset radio.
- 10 new Wastelander loadouts have been added:
  - the Gambler, which has a lot of interesting RP items;
  - the Vaquero, which lets players explore another aesthetic in line with the South West;
  - the Hobo, for those who want a challenge;
  - the Hombre, which replaces the Desert Ranger loadout so it is more in line with our current lore and gets away from New Vegas;
  - the Ex-Military, for those who want to LARP as a soldier or mercenary;
  - a brand-new Brotherhood of Steel waster loadout that does not have grenades and is more balanced against other waster loadouts;
  - the Eidelon, for those who wish to be sneaky, and slightly Russian if they so wish;
  - the Aviator, which gives players options for that air-pilot aesthetic;
  - the Trapper, for that CLASSIC CLASSIC Fallout experience;
  - and finally the Trouper, for all of the bottoms out there.
FAR-LANDER CHANGES
- I have created a whole new set of traits, and it took a lot of work (doing multiple things seven times over): a traitbook that lets you pick from a list, which makes it so you can craft one of the seven different former tribes' armor and garments. Rejoice!
- The loadouts for Far-Landers have been reduced from 21-3 to 5. To those who wish for aesthetic or loadout options which have been omitted by this decision: let me know and I can tweak some things or add another class, since I removed a lot, but they must be more generalized, so that specific tribes aren't tied to actual loadout options.
REDWATER CHANGES
- Redwater Slave, my favourite job, no longer has explosive collars; they are now shock collars. Aww man, I wanted to be round-ended.
- A whole new job was added, called Redwater Resident. They will be in charge of supervising and protecting Redwater, and act as inhabitants of the town; they may travel outside of the town to gather materials, but otherwise should stay in the area and around town. They are equipped in quite a robust manner, so anyone who dares to battle the town had better be geared to the teeth.
- The Overboss kept spawning in Nash naked. I fixed this. They now have clothes, and also spawn in Redwater.
Store macro metadata in the cdylib file
The nice thing about this system is that the metadata is always bundled
together with the build output. This makes it easier to ensure that the
generated scaffolding matches up with the dylib that it's going to be linked to.
This avoids the work that rebuild_metadata() needed to do. Metadata is serialized with bincode to keep the binary size reasonable.
The downside is that we need to parse a dylib, which feels slightly risky. However, it seems less risky overall to me, since we don't have to worry about tracking the JSON files, especially after fixing the recent sccache issue. Also, extracting the symbol data with the goblin crate is not that bad; see macro_metadata/extract.rs for how it's done.
In order to use the macro metadata, you now need to specify --cdylib [path-to-library] when running uniffi-bindgen. This is annoying, but it will be simpler once the proc-macros can handle all parts of the interface. At that point, we can make uniffi-bindgen accept either a UDL path or a cdylib path as its positional arg.
I didn't add support for external bindings to pass in a cdylib path, since adding an argument to that function would be a breaking change, and then we would need to do another breaking change to make the param udl_or_cdylib_file. If external bindings really want to, they can call uniffi_bindgen::interface::macro_metadata::add_to_ci directly.
Added the uniffi-bindgen dump-json command, which takes a cdylib path and prints the metadata as JSON.
I tested that dump-json works properly on the following targets:
- x86_64-unknown-linux-gnu (ELF 64-bit)
- i686-unknown-linux-gnu (ELF 32-bit)
- x86_64-apple-darwin (Mach-O 64-bit)
- x86_64-pc-windows-gnu (DLL 64-bit)
- i686-pc-windows-gnu (DLL 32-bit)
This seems like good enough coverage to me, although there are a lot of other systems that would be nice to test on. The limiting factor was setting up the cross-compilation toolchains on my machine. Maybe we should add some more CI platforms that just run macro-metadata-related tests.
Updated the testing code to pass the cdylib path, rather than the directory that contains it.
Added an enum that covers all Metadata types. This is what we serialize in the cdylib.
fuck fuck you fuck you DIE DIE DIE I'M EVILgit statusgit status
I HATE LIVERMED, I HATE LIVERMED, I HATE LIVERMED!!! (#3714)
- Makes combat medical kits better
- Replaces the Dylovene pill bottle in the Combat Medical Kit with a Carthatoline pill bottle, as every chem inside it already WAS an upgrade over its normal counterpart, making it better at halting toxin damage and preventing the liver from killing you. Also adds a Sanguinum syrette to stave off the massive blood loss which would cause the former as well.
- Replaces one of the Quicklot syrettes with a Sanguinum syrette on the Oxygen Deprivation First Aid Kit for better treatment of causes of oxyloss.
- Standardizes pill icons based on chem colors across all pre-built pills, for easier recognition.
- Guarantees the "skill issue/salt PR" tag since it doesn't fix underlying issues of current medical system
- Adds Carthatoline pills to the deferred and corpsman large kits, keeping in line with the rest of the PR.
- Blood regen pills!
- Adds pre-made Ferritin-Hemosiderin pills composed of iron and protein to help regenerate lost blood
- Replaces Sanguinum syrette on combat kit with ferritin-hemosiderin pill bottle
- Combat surgery kits now really hold advanced tools (except bone gel since the adv version is Soteria made)
- Makes the advanced bone gel item description not a copypaste of its stock counterpart
- Forgot a comma
Damn my haste.