1,948,569 events, 995,320 push events, 1,598,713 commit messages, 123,774,825 characters
Updates all Silicon Law related material costs (#48801)
About The Pull Request
The AI/Cyborg uploads, and AI modules, now require bluespace crystals to construct.
Why It's Good For The Game
Before you say anything I want you to consider the following:
You are an antagonist and you want to subvert the AI. Do you: A) Break into the AI Upload, an extremely risky, obvious method, but you have everything you need to subvert the AI without any extra steps? B) Break into secure tech storage, a place with significantly less security that is not checked on as often, but have to find another way to acquire the modules you need? C) Wait a good 15-25 minutes for mining and science to do the bare minimum of their job, enter the back way, and print off everything you need with very little risk, leaving behind very little evidence?
If you answered anything but C, I like you and you can fuck my sister, but let's be realistic here. It's not going to """fix""" the issue and I won't pretend it will, but with the rarity of bluespace crystals these modules won't be as prevalent or as available as early, which may force you into the riskier strategies if the resources you need don't appear quickly enough, and that encourages conflict.
Changelog
🆑
balance: AI and Cyborg Upload consoles now require bluespace crystals and diamond to print
balance: Law modules now require bluespace crystals to print
/🆑
More cats = Better world
What do cats like to eat on a hot day? A mice-cream cone! Why do cats always get their way? They are very purr-suasive! How do two cats end a fight? They hiss and make up! What should you use to comb a cat? A catacomb! What is a cat's favorite movie? The Sound of Mewsic! How do you know a cat is agitated? He's having a hissy fit! What's a cat's favorite magazine? Good Mousekeeping! Why did the cat wear a fancy dress? She was feline fine! What's a cat's favorite color? Purr-ple! Why was the cat afraid of the tree? Because of its bark! What did the cat say when it was confused? "I'm purr-plexed!" What's a cat's favorite dessert? Chocolate mouse! Where does a cat go when it loses its tail? The re-tail store! What do you call a cat who lives in an igloo? An eskimew! How do cats stop crimes? They call claw enforcement! Why was the cat so agitated? Because he was in a bad mewd! What do you call a cat who loves to bowl? An alley cat! What do cats love to do in the morning? Read the mewspaper! How is cat food sold? Usually, purr the can! What do baby cats always wear? Diapurrs!
Created Text For URL [www.the-star.co.ke/counties/central/2020-01-22-estranged-lover-kills-girlfriend-with-her-mothers-kitchen-knife/]
Pull up following revision(s) (requested by maxv in ticket #817):
sys/net/npf/npf_inet.c: revision 1.38-1.44
sys/net/npf/npf_handler.c: revision 1.38-1.39
sys/net/npf/npf_alg_icmp.c: revision 1.26
sys/net/npf/npf.h: revision 1.56
sys/net/npf/npf_sendpkt.c: revision 1.17-1.18
Declare NPC_FMTERR, and use it to kick malformed packets. Several sanity checks are added in IPv6; after we see the first IPPROTO_FRAGMENT header, we are allowed to fail to advance, otherwise we kick the packet. Sent on tech-net@ a few days ago, no response, but I'm committing it now anyway.
Switch nptr to uint8_t, and use nbuf_ensure_contig. Makes us use fewer magic values.
Remove dead branches, 'npc' can't be NULL (and it is dereferenced earlier).
Fix two consecutive mistakes.
The first mistake was npf_inet.c rev1.37: "Don't reassemble ipv6 fragments, instead treat the first fragment as a regular packet (subject to filtering rules), and pass subsequent fragments in the same group unconditionally."
Doing this was entirely wrong, because then a packet just had to push the L4 payload in a secondary fragment, and NPF wouldn't apply rules on it - meaning any IPv6 packet could bypass >=L4 filtering. This mistake was supposed to be a fix for the second mistake.
The second mistake was that ip6_reass_packet (in npf_reassembly) was getting called with npc->npc_hlen. But npc_hlen pointed to the last encountered header in the IPv6 chain, which was not necessarily the fragment header. So ip6_reass_packet was given garbage, and would fail, resulting in the packet getting kicked. So basically IPv6 was broken by NPF.
The first mistake is reverted, and the second one is fixed by replacing
    hlen = sizeof(struct ip6_frag);
with
    hlen = 0;
Now the iteration stops on the fragment header, and the call to ip6_reass_packet is valid.
My npf_inet.c rev1.38 is partially reverted: we don't need to worry about failing properly to advance; once the packet is reassembled npf_cache_ip gets called again, and this time the whole chain should be there.
Tested with a simple UDPv6 server - send a 3000-byte-sized buffer, the packet gets correctly reassembled by NPF now.
Mmh, put back the RFC6946 check (about dummy fragments), otherwise NPF is not happy in npf_reassembly, because NPC_IPFRAG is again returned after the packet was reassembled.
I'm wondering whether it would not be better to just remove the fragment header in frag6_input directly.
Fix the "return-rst" rule on IPv6 packets. The scopes needed to be set on the addresses before invoking ip6_output, because ip6_output needs them. The reason they are not here already is because pfil_run_hooks (in ip6_input) is called before the kernel initializes the scopes.
Until now ip6_output was always failing, and the IPv6-TCP-RST packet was never actually sent.
Perhaps it would be better to have the kernel initialize the scopes before invoking pfil_run_hooks, but several things will need to be fixed in several places.
Tested with a simple TCPv6 server. Until now the client would block waiting for an answer that never came; now it receives an RST right away and closes the connection, as expected. I believe that the same problem exists in the "return-icmp" rules, but I can't investigate this right now (some problems with wireshark).
Fix the IPv6 payload computation in npf_tcpsaw. It was incorrect, and this caused the "return-rst" rules to send back an RST with the wrong ACK when the received SYN had an IPv6 option.
Set the scopes before calling icmp6_error(). This fixes a bug similar to the one I fixed in rev1.17: since the scopes were not set the packet was never actually sent.
Tested with wireshark, now the ICMPv6 reply is correctly sent, as expected.
Don't read the L4 payload after IPPROTO_AH when handling IPv6 packets. AH must be considered as the payload, otherwise a
block all
pass in proto ah from any
pass out proto ah from any
configuration will actually block everything, because NPF checks the protocol against the one found after AH, and not AH itself.
In addition it may have been a problem for stateful connections; an AH packet sent by an attacker with an incorrect authentication and a correct TCP/UDP/whatever payload from an active connection could manage to change NPF's FSM state, which would perhaps have altered the legitimate connection with the authenticated remote IPsec host.
Note that IPv4 already doesn't go beyond AH, which is the correct behavior.
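To make the rule semantics concrete, here is a tiny Python sketch of the intended matching behavior (an illustrative model only, not NPF code; the helper name match_protocol is made up):

IPPROTO_TCP, IPPROTO_UDP, IPPROTO_AH = 6, 17, 51

def match_protocol(header_chain):
    # header_chain lists the protocol numbers found while walking the packet.
    # AH terminates the walk: rules match AH itself, never the payload
    # carried behind the (possibly unauthenticated) AH header.
    for proto in header_chain:
        if proto == IPPROTO_AH:
            return IPPROTO_AH
    return header_chain[-1]

# A packet carrying AH around TCP now matches "proto ah", so the
# "pass ... proto ah" rules above admit it despite "block all".
assert match_protocol([IPPROTO_AH, IPPROTO_TCP]) == IPPROTO_AH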
Add XXX (we don't handle IPv6 Jumbograms), and whitespace.
Add workaround typedefs for awful hack.
XXX: It's completely unacceptable for me to refer to libsa files from userland.
XXX: Nowadays we no longer have serious size restrictions in install media,
XXX: so I think it's much better to simply remove these ugly SMALLPROG hacks.
XXX: If you really want to share files, please move them into src/common
XXX: with defined APIs.
Redone sounds fuck you pyko
jk i love you pyko but these sounds hurt me inside also new roundstart i guess
High Church Bon tweaks
- BIGGEST CHANGE HERE is setting up Bon w/ papal mechanics. We'll need to do some fiddly stuff to get that looking right. Also I can't remember for sure if we even wanted to do it? I'm pretty sure we do?? I can undo that if necessary tho -- for real this isn't even a little bit Done. this was very quick and dirty
- Moved Bon to Eastern religion group so that it will view Buddhism as heretical instead of heathen
- Bunch of minor changes to government/ruler names and shit for the Sacred Hierarchy
"9am. I am up. I actually woke up earlier, but have been indulging myself in my imagination.
If there is a direction I want to change, it would be to take my imagination more seriously.
Deep down, I've been too laid back and skeptical.
This is reflected in the last 5 years. Spiral since its inception was always an expression of both my belief and skepticism in ML. But I did not channel this skepticism properly. A really common pattern in my explorations is that I would figure out some big optimization like KFAC, Zap, that cumulative eligibility trace update, and then find that they are local minima in terms of design. Even minor architectural changes break them.
In the end, I can only count on plain old backprop.
And starting from 2015, ever since the discrete optimization course, I did not really understand why anything other than point optimizations was necessary. Even now I do not have an absolute view I could point to, but nonetheless the hints are everywhere that diversity is the key to uncertainty estimation.
Back in 2015, I thought that there would be something smarter than multiple random inits, but I guess not.
9:10am. This time I really want to do so much more.
And conversely, this is the last time I will ever work on a language. I swear it. V0.2 is the last time I will ever put so much effort into this because honestly, I have had enough. I do not want to end up in a situation where in 2030 I am both still a pure human and still fiddling with this donkey shit.
Programs are crystallizations of understanding, so I am going to finish it here.
9:35am. Ok, enough slacking. Let me see whether I can get some work done. I need to start work on partial evaluation. Just two more years of this and then I am done for good.
When v0.2 is done, I will have no more refactoring or compilation speed issues anymore. It needs to be done.
9:50am. Yeah, it is going to be hard to get going. Right now, rather than programming, I am posting on /sci/. For the past few months my frequency of posting has been way up. From once a year on average to a few times per week.
...Ah, let me chill for a while longer and then I will start."
#258 change requests from copyeditor implemented
Hi @amsichani and @acrymble, I really love the change requests made by the copyeditor. As a non-native speaker this is a blessing! I corrected everything according to the notes. The last section (Mueller Report) needs some more attention. Thanks a lot
Fix a lovely UnicodeDecodeError
When invoking a shell command you need to
- invoke the shell command with the proper LANG environment set
- encode the command according to the above setting
In other words you need to encode the strings in the command before shipping it to the subprocess.
If you neglect to do that then Python (2) will implicitly encode the command with the ascii encoding which will of course fail if you have anything but ascii in your command.
See also https://nedbatchelder.com/text/unipain.html for a very enlightening treatise of this painful subject.
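For illustration, here is a minimal Python 2 sketch of this pattern, assuming a UTF-8 locale; run_cmd is a hypothetical helper name, not the project's actual function:

# -*- coding: utf-8 -*-
import os
import subprocess

def run_cmd(args, lang="en_US.UTF-8", encoding="utf-8"):
    # Ship the locale to the child process.
    env = dict(os.environ, LANG=lang)
    # Encode unicode arguments ourselves, so Python 2 never falls back
    # to the implicit ascii codec when building the command line.
    encoded = [a.encode(encoding) if isinstance(a, unicode) else a for a in args]
    return subprocess.check_output(encoded, env=env)

# A non-ascii argument no longer raises UnicodeDecodeError:
# print run_cmd([u"ls", u"-l", u"café"])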
"10:15am. Let me finally start this thing.
First of all - types.
and TypedData =
| TyList of TypedData list
| TyKeyword of KeywordTag * TypedData []
| TyFunction of Expr * StackSize * EnvTerm
| TyRecFunction of Expr * StackSize * EnvTerm
| TyObject of ObjectDict * EnvTerm
| TyMap of MapTerm
| TyT of ConsedTy
| TyV of TyTag
| TyBox of TypedData * ConsedTy
| TyLit of Value
I need to make this into something sensible. I already said I would remove consing and replace it with reference memoization.
10:20am. The pevaller is a big piece to deal with, so let me do it slowly, one at a time.
Focus me. Let me just do the types and then I'll take a proper break.
10:40am.
module Spiral.PartEval
open Spiral.Prepass
type Env<'a,'b> = {type' : StackSize * 'a []; value : StackSize * 'b []}
type LayoutType =
| LayoutStack
| LayoutHeap
| LayoutHeapMutable
type Ty =
| PairT of Ty * Ty
| KeywordT of KeywordTag * Ty []
| FunctionT of Expr * StackSize * Env<Ty, Ty>
| RecordT of Map<KeywordTag, Ty>
| PrimT of Parsing.PrimitiveType
| LayoutT of LayoutType * Data
| ArrayT of Ty
| RuntimeFunctionT of Ty * Ty
| MacroT of Data
and Data =
| TyPair of Data * Data
| TyKeyword of KeywordTag * Data []
| TyFunction of Expr * Env<Ty, Data>
| TyRecord of Map<KeywordTag, Data>
| TyLit of Tokenize.Value
| TyV of int * Ty
| TyRef of int // For use in join points, layout types and macros
I am missing union types here, but apart from that this is excellent. This time I do not even need recursive functions. They will all be there in the ...
Ah, I just realized. It is not just in renaming that I have to watch out for cycles, but when returning from regular join points as well. And in dynamize too.
And in destructure. Pretty much every function that iterates over the Data and Ty now has to do memoization by reference.
11am. Right now I am thinking how I am going to handle that.
...There is no way of getting around it. I am going to have to ditch the elegant memoize and take care of the troublesome cases by hand.
11:05am. Forget about this. Let me do the types for the codegen.
11:35am. Ah, fuck. I am still thinking about the recursive memoizer. I am trying to find a way to simplify it, but I can't come up with anything.
Without a doubt, having cycles really makes things more difficult.
let keyword_env = string_to_keyword "env:" // For join points keys. It is assumed that they will never be printed.
let keyword_apply = string_to_keyword "apply:"
let keyword_key_value = string_to_keyword "key:value:"
let keyword_key_state_value = string_to_keyword "key:state:value:"
let keyword_text = string_to_keyword "text:"
let keyword_variable = string_to_keyword "variable:"
let keyword_literal = string_to_keyword "literal:"
let keyword_type = string_to_keyword "type:"
What is all this. Nevermind, I will remove this.
11:40am.
let typed_data_to_consed' call_data =
let dict = Dictionary(HashIdentity.Reference)
let call_args = ResizeArray(64)
let rec f x =
memoize dict (function
| TyList l -> List.map f l |> ctylist
| TyKeyword(a,b) -> (a,Array.map f b) |> ctykeyword
| TyFunction(a,b,c) -> (a,b,Array.map f c) |> ctyfunction
| TyRecFunction(a,b,c) -> (a,b,Array.map f c) |> ctyrecfunction
| TyObject(a,b) -> (a,Array.map f b) |> ctyobject
| TyMap l -> Map.map (fun _ -> f) l |> ctymap
| TyV(T(_,ty) as t) -> call_args.Add t; CTyV (call_args.Count-1, ty)
| TyBox(a,b) -> (f a, b) |> CTyBox
| TyLit x -> CTyLit x
| TyT x -> CTyT x
) x
let x = f call_data
call_args.ToArray(),x
As expected the very first function I have to do is this one.
Ok, I will handle it.
But I feel a bit bad about removing consed annotations.
I am going to do a little trick.
12pm.
| TyFunction(b,c) ->
let v_ar = Array.zeroCreate (snd c.value).Length
let r = TyFunction(b,{c with value=fst c.value, v_ar})
Array.iter (fu)
Holy shit, this is even more annoying than I thought it would be.
| TyFunction(a,b,c,d,e) ->
let e' = Array.zeroCreate e.Length
let r = TyFunction(a,b,c,d,e')
Array.iteri (fun i x -> e'.[i] <- f x) e
r
Let me do it like this.
12:20pm.
// Memoizing map for cyclical structure
let inline memoize_rec (dict : Dictionary<_,_>) (e : _ []) ret f x =
match dict.TryGetValue x with
| true, v -> v
| _ ->
let e' = Array.zeroCreate e.Length
let r = ret e'
dict.Add(x,r)
Array.iteri (fun i x -> e'.[i] <- f x) e
r
/// Unlike v0.1 and previously, the functions can now have cycles so that needs to be taken care of during memoization.
let typed_data_to_renamed' call_data =
let dict = Dictionary(HashIdentity.Reference)
let call_args = ResizeArray(64)
let rec f x =
let memoize f = memoize dict f x
let memoize_rec e ret f = memoize_rec dict e ret f x
match x with
| TyPair(a,b) -> memoize (fun _ -> TyPair(f a, f b))
| TyKeyword(a,b) -> memoize (fun _ -> TyKeyword(a, Array.map f b))
| TyFunction(a,b,c,d,e) -> memoize_rec e (fun e' -> TyFunction(a,b,c,d,e')) f
| TyRecord l -> memoize (fun _ -> TyRecord(Map.map (fun _ -> f) l))
| TyV(T(_,ty) as t) -> memoize (fun _ -> call_args.Add t; TyV(T(call_args.Count-1, ty)))
| TyLit x -> memoize (fun _ -> TyLit x)
| TyRef _ -> failwith "Compiler error"
let x = f call_data
call_args.ToArray(),x
This is so annoying, but I am into it. I wish I did not have to use T in TyV, but it will do.
...And I haven't done what this thing asks, which is to use TyRef.
I've been thinking about the join point returns and have done the thing correctly as a result. Whoops.
12:25pm. Forget it, I really need that break here."
[FIX] runbot_merge: disable homepage added enabled by website
Pages take over from redirections which really is a pain in the ass when trying to find out why the bloody redirection seemingly refuses to work.
Note: can't use the record tag because homepage_page is marked as noupdate, so we have to bypass the flag checking.
BlockChain: damn this shit is not that hard for now
"3:05pm. Damn this took a while. To make matters worse, it was not like I was doing anything in particular.
This is not like the time I was immersed in reading the last few weeks. Right now that I am in this programming mindset I can't really enjoy fiction as I usually would.
And yet, I do not feel like programming either. So it is the same as usual for me.
I'll get into it once I start.
First off, let me do the two main functions ty_to_data and data_to_ty. Then I will do those consing functions again, this time properly.
let rec destructure tyv_or_tyt x =
let inline f x = destructure tyv_or_tyt x
match x with
| ListT x -> TyList(List.map f x.node)
| KeywordT(C(a,l)) -> TyKeyword(a,Array.map f l)
| FunctionT(C(a,b,l)) -> TyFunction(a,b,Array.map f l)
| RecFunctionT(C(a,b,l)) -> TyRecFunction(a,b,Array.map f l)
| ObjectT(C(a,l)) -> TyObject(a,Array.map f l)
| MapT l -> TyMap(Map.map (fun _ -> f) l.node)
| x -> tyv_or_tyt x
let rec tyv = function
| ListT x -> TyList(List.map tyv x.node)
| KeywordT(C(a,l)) -> TyKeyword(a,Array.map tyv l)
| FunctionT(C(a,b,l)) -> TyFunction(a,b,Array.map tyv l)
| RecFunctionT(C(a,b,l)) -> TyRecFunction(a,b,Array.map tyv l)
| ObjectT(C(a,l)) -> TyObject(a,Array.map tyv l)
| MapT l -> TyMap(Map.map (fun _ -> tyv) l.node)
| LayoutT(C (_,_,true)) | UnionT _ | RecUnionT _ | MacroT _ | TermCastedFunctionT _ | PrimT _ as ty -> TyV(T(tag(), ty))
| ArrayT(_,l) as ty -> if type_is_unit l then TyT ty else TyV(T(tag(), ty))
| LayoutT _ as ty -> TyT ty
let rec tyt = function
| ListT x -> TyList(List.map tyt x.node)
| KeywordT(C(a,l)) -> TyKeyword(a,Array.map tyt l)
| FunctionT(C(a,b,l)) -> TyFunction(a,b,Array.map tyt l)
| RecFunctionT(C(a,b,l)) -> TyRecFunction(a,b,Array.map tyt l)
| ObjectT(C(a,l)) -> TyObject(a,Array.map tyt l)
| MapT l -> TyMap(Map.map (fun _ -> tyt) l.node)
| ArrayT _ | LayoutT _ | UnionT _ | RecUnionT _ | MacroT _ | TermCastedFunctionT _ | PrimT _ as ty -> TyT ty
Did I forget to remove destructure or did I leave it in on purpose? I can't remember.
3:10pm. Focus me focus. Just do these functions. I'll think about the rest later.
This won't be done that easily. It will take me a while to get into the swing of things and go through all of this.
3:20pm.
let ty_to_data i t =
let d = Dictionary(HashIdentity.Reference)
let rec f = function
| PairT(a,b) -> TyPair(f a, f b)
| KeywordT(a,b) -> TyKeyword(a,Array.map f b)
| FunctionT(a,b,c,d,e) -> TyFunction(a,b,c,d,Array.map f e)
Oh, this is already an interesting case. It might seem straightforward - one might say that I just need to memoize here, but...
The issue here is that I actually need to ignore references in the parallel parts; I only have to memoize nested references.
Something like join x,x where x is a recursive function should not return the same thing everywhere.
3:35pm.
| LayoutT(C (_,_,true)) | UnionT _ | RecUnionT _ | MacroT _ | TermCastedFunctionT _ | PrimT _ as ty -> TyV(T(tag(), ty))
| ArrayT(_,l) as ty -> if type_is_unit l then TyT ty else TyV(T(tag(), ty))
| LayoutT _ as ty -> TyT ty
Previously I had this checking whether something is a unit. This time I have no need for this sort of confusion at all.
3:40pm.
let ty_to_data i x =
let m = Dictionary(HashIdentity.Reference)
let rec f x =
match x with
| PairT(a,b) -> TyPair(f a, f b)
| KeywordT(a,b) -> TyKeyword(a,Array.map f b)
| FunctionT(a,b,c,d,e) ->
match m.TryGetValue x with
| true, v -> v
| _ ->
let e' = Array.zeroCreate e.Length
let r = TyFunction(a,b,c,d,e')
m.Add(x,r)
Array.iteri (fun i x -> e'.[i] <- f x) e
m.Remove(x) |> ignore // Non-nested mapping should not share vars
r
| RecordT l -> TyRecord(Map.map (fun k -> f) l)
| PrimT _ | LayoutT _ | ArrayT _ | RuntimeFunctionT _ | MacroT _ -> let r = TyV(T(!i,x)) in i := !i+1; r
f x
Here is the first of the bunch. This will do nicely.
What else is there? Let me go to the next one.
3:50pm.
open Spiral.Tokenize
open Spiral.Parsing
let value_to_ty = function
| LitUInt8 _ -> PrimT UInt8T
| LitUInt16 _ -> PrimT UInt16T
| LitUInt32 _ -> PrimT UInt32T
| LitUInt64 _ -> PrimT UInt64T
| LitInt8 _ -> PrimT Int8T
| LitInt16 _ -> PrimT Int16T
| LitInt32 _ -> PrimT Int32T
| LitInt64 _ -> PrimT Int64T
| LitFloat32 _ -> PrimT Float32T
| LitFloat64 _ -> PrimT Float64T
| LitBool _ -> PrimT BoolT
| LitString _ -> PrimT StringT
| LitChar _ -> PrimT CharT
This advice of not putting types all in one module is really not working for me.
Can't I do selective imports of types?
Also that reminds me, in module open I've forgotten all about types.
Maybe later it might be good to add open type to the language?
4:05pm. Oof, I am completely distracted.
open Spiral.Tokenize
open Spiral.Parsing
let value_to_ty = function
| LitUInt8 _ -> PrimT UInt8T
| LitUInt16 _ -> PrimT UInt16T
| LitUInt32 _ -> PrimT UInt32T
| LitUInt64 _ -> PrimT UInt64T
| LitInt8 _ -> PrimT Int8T
| LitInt16 _ -> PrimT Int16T
| LitInt32 _ -> PrimT Int32T
| LitInt64 _ -> PrimT Int64T
| LitFloat32 _ -> PrimT Float32T
| LitFloat64 _ -> PrimT Float64T
| LitBool _ -> PrimT BoolT
| LitString _ -> PrimT StringT
| LitChar _ -> PrimT CharT
let data_to_ty x =
let m = Dictionary(HashIdentity.Reference)
let rec f x =
let memoize f = memoize m f x
let memoize_rec e ret f = memoize_rec m e ret f x
match x with
| TyPair(a,b) -> memoize (fun _ -> PairT(f a, f b))
| TyKeyword(a,b) -> memoize (fun _ -> KeywordT(a, Array.map f b))
| TyFunction(a,b,c,d,e) -> memoize_rec e (fun e' -> FunctionT(a,b,c,d,e')) f
| TyRecord l -> memoize (fun _ -> RecordT(Map.map (fun _ -> f) l))
| TyV(T(_,ty) as t) -> ty
| TyLit x -> value_to_ty x
| TyRef _ -> failwith "Compiler error"
f x
Well, I did do this, but now I am asking questions such as 'how do I rename type variables in module open'.
Just like with keywords, I am going to have to assign tags to every nominal type just to make sure there is no confusion.
Agh. Let me take a little break here."
Create Xor_Segment.py
Xenon loves XORs and thus he has given his friend Subash a challenge to XOR all values from L to R inclusive, that is, L⊕(L+1)⊕(L+2)⊕…⊕R = Z. Subash is on a date with his new girlfriend; will you be his helping hand and find Z?
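For reference, here is a minimal Python sketch of one standard way to compute Z (not necessarily what Xor_Segment.py actually does): the XOR of 0..n follows a period-4 pattern, so the segment XOR is prefix_xor(R) ^ prefix_xor(L - 1).

def prefix_xor(n):
    # XOR of all integers from 0 to n; the result only depends on n % 4.
    return [n, 1, n + 1, 0][n % 4]

def xor_segment(L, R):
    # L ^ (L+1) ^ ... ^ R
    return prefix_xor(R) ^ prefix_xor(L - 1)

# Example: 3 ^ 4 ^ 5 == 2
assert xor_segment(3, 5) == 2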
Includes today's pack run attempt.
I was very busy programming yesterday and decided to set my new Tioga up this morning between when I'd get up and when I'd get out the door. Well, I wound up spending a bunch of extra time dealing with a dirty kitchen and a recalcitrant daughter and then when I did unbox the Tioga I saw that the shoulder straps were on the lower bar rather than the upper bar. So ...
... I decided to experiment and keep the straps where they were and while I was at it, I'd wrap a 30# dumbbell in a towel and use that for weight. However, when I went to put the dumbbell in, I tried putting it in the lowest compartment to avoid stress on the tensioning bars, but it shifted and it was clear it would not stay put where it was, so after a bunch of time futzing around I said "fuck it", I'm not doing a pack run today. I'll just go back to programming.
I found I couldn't really program well. I was too wired, which I had attributed to my having had four double espressos, so eventually I said "fuck it" and put my pack on and headed out the door. Immediately it was bouncing as I ran, so I tried adjusting my gait and pace to make it bounce less. Nothing really worked, per se. Eventually I tightened both shoulder straps quite a bit and that made things different, but not better.
I decided to stop running once I got to the half-way point (5.5 miles) and then to walk home the most direct route. As such, my total mileage was 7.67 miles. Only after I got home did I realize that I had accidentally had twice the amount of caffeine at 5:15 than I had planned (4 double espressos instead of 2).
I'm a bit beaten up from the misconfigured pack and I'm pretty fried from the excess caffeine (I still had 2 more double espressos at 6:30). OTOH, I have until Monday to get my pack configured properly and I think I took pictures of the old settings before I threw away the remains of my previous Tioga.
Add support for btrfs sends stream
Summary:
This adds support to tupperware agent to handle both subvolumes as they are created now and with sendstreams.
This is complementary to the installer change.
This was incredibly painful to test, mostly due to deficiencies in how we test provisioning and chef changes, so if you've been wondering what claudioz@ did with sendstreams: it was mostly babysitting cyborg jobs and making chef/provisioning changes just to test stuff like this without going mad.
This was horrible and I hope to improve it (some work has already been put in) or never have to do it again in my lifetime.
Reviewed By: joshuamiller01
Differential Revision: D19102184
fbshipit-source-id: 7e28e98b1bd2d49a633e75bf5f5b173635fc5207
And god said, LET THERE BE LIGHT! In the brig.
Adds lighting fixtures to the brig: how I missed this, I have no fucking clue. May do stupid shit, idk; there are exclamation marks on the folders but nothing on the files.
Product Managers, to pair or not to pair?
Product Managers, to pair or not to pair?
Speaker: Mihaela Draghici
Available: first day, second day, third day
Length: 45 minutes
Language: English
Description Just as, for engineers, pair programming means two engineers working together, PM pairing refers to two Product Managers working together as part of the same product team.
However, there are few resources available regarding pairing for Product Managers. That's because it's not too common.
What does pairing really mean for PMs? How is it different from pair programming? Why is it great? Why does it suck? When is it good to pair? And more than that, how to make the most of it when you do pair? I aim to share my lessons learned and tips & tricks about pairing for PMs (from own experience), to give you more insights on this topic.
Speaker Bio A Romanian, with a British passport, living in Lisbon. Marketing girl, turned tech girl, building products that deliver happiness. Strong advocate for gender equality and inclusive work environments in tech industries.
Links
Company: https://www.linkedin.com/company/volkswagen-digital-solutions/
GitHub: https://github.com/mmihaeladraghici/
Photo: https://pbs.twimg.com/profile_images/1271157175/mihaela-draghici1_400x400.jpg
Extra Information
The purpose of this talk is to offer more information regarding pairing practices for Product Managers, to help:
- those who have not heard of pairing for PMs: learn more about it;
- those who have heard about it but have not considered pairing in their product teams: start considering it and weigh the benefits for their products/business;
- those who have tried it: open a conversation about advantages/disadvantages and exchange knowledge.
I have over 10 years of international work experience in digital marketing and product management. I have contributed to both small and large-scale product initiatives to bring business value and worked closely with engineering teams, UX designers, localisation specialists as well as business stakeholders across regions. I am passionate about what I do, the products I've built and the people I work with. Since I have learned a lot throughout my career and benefited from great colleagues and mentors, I am looking to share my knowledge and give back to the community through talks, events & mentoring.
Some references to myself and my work:
https://www.facebook.com/GirlsinTechLondon/ https://www.youtube.com/watch?v=VSHNdMvXlUY https://www.eventbrite.co.uk/e/getting-into-tech-tech-spectrum-panel-for-graduates-tickets-50451371410# https://issuu.com/businesswomanclub/docs/april_digital_business_women_emagaz/60 https://www.eventbrite.co.uk/e/in-common-women-at-work-tickets-65778833261# https://www.linkedin.com/pulse/10-things-i-love-being-product-manager-mihaela-draghici https://performancein.com/news/2019/06/18/awin-thinktankuk-2019-and-future-proofing-affiliate-industry/
New Extension attributes
Holy cow, a commit! It's my first one in over a year and it's a boring one... and boy, I am rusty!
New EAs created to scan the Users folder to find VM preference files and try to extrapolate the guest OS based on some settings.
It's ugly, but it works for what I need right now.
Revert "x86/apic: Include the LDR when clearing out APIC registers"
[ Upstream commit 950b07c14e8c59444e2359f15fd70ed5112e11a0 ]
This reverts commit 558682b5291937a70748d36fd9ba757fb25b99ae.
Chris Wilson reports that it breaks his CPU hotplug test scripts. In particular, it breaks offlining and then re-onlining the boot CPU, which we treat specially (and the BIOS does too).
The symptoms are that we can offline the CPU, but it then does not come back online again:
smpboot: CPU 0 is now offline
smpboot: Booting Node 0 Processor 0 APIC 0x0
smpboot: do_boot_cpu failed(-1) to wakeup CPU#0
Thomas says he knows why it's broken (my personal suspicion: our magic handling of the "cpu0_logical_apicid" thing), but for 5.3 the right fix is to just revert it, since we've never touched the LDR bits before, and it's not worth the risk to do anything else at this stage.
[ Hotplugging of the boot CPU is special anyway, and should be off by default. See the "BOOTPARAM_HOTPLUG_CPU0" config option and the cpu0_hotplug kernel parameter.
In general you should not do it, and it has various known limitations (hibernate and suspend require the boot CPU, for example).
But it should work, even if the boot CPU is special and needs careful treatment - Linus ]
Link: https://lore.kernel.org/lkml/156785100521.13300.14461504732265570003@skylake-alporthouse-com/ Reported-by: Chris Wilson [email protected] Acked-by: Thomas Gleixner [email protected] Cc: Bandan Das [email protected] Signed-off-by: Linus Torvalds [email protected] Signed-off-by: Sasha Levin [email protected]
SSS Rework FOMOD GUI
Depends on inf-wx-begone, rewrites most of the GUI to use the wrappers instead. Drops a whole bunch of wx usages, which is nice. RadioButton needs wrapping, see all the ugly hacks at the bottom of gui_fomod. Also, the design that uses a dict of wx objects to store group objects has to go; it's fundamentally hacky and very fragile - e.g. imagine if the wx guys decided to add slots to their objects.
Also contains a bunch of fixes and misc improvements, e.g. user-facing strings have been made translatable, some bugs that were carried over from belt have been fixed, and the 'Back' button no longer works on the first page.
Note the glaring TODOs - this is a straight up port of the original GUI, but we currently don't have a way to change fonts, which the original GUI relied on to differentiate its components. I added some HBoxedLayouts as an alternative, which works fine for the main FOMOD dialog and may even be an improvement in terms of visual clarity, but doesn't help at all with the results screen, which is now an unreadable mess.
Utumno: fomod_gui: comments to docstrings
Co-authored-by: MrD [email protected]