< 2020-09-29 >

2,829,451 events, 1,465,572 push events, 2,232,406 commit messages, 168,259,166 characters

Tuesday 2020-09-29 06:51:07 by Chris Koch

fix the double fault with a shitty RSP/RBP context switch.

This is the ugliest, worst piece of shit code I've ever written.

In the original version of this EFI implementation, we jump into the EFI app with the Linux kernel's current page tables and pointers. RSP and RBP point to a virtual address in high memory, i.e. something like 0xffffc90012345678. EFI applications in boot services expect all pointers they are given to be 1:1 virt-to-phys mapped. The given RSP and RBP obviously aren't.

Eventually, the Windows boot manager notices this and maps RSP/RBP to their 1:1 location in the set of page tables it runs under. However, when we return to Linux, and when Linux decides to switch to another set of page tables internally, the stack pointer suddenly points nowhere and we double-page-fault.

So what do?

Upon jumping to the EFI app, find out where our current stack is physically located, add 1:1 mappings for the stack, and adjust RSP and RBP to the 1:1 mapped location.

Upon coming back from the EFI app into Linux, reset the stack pointer to the kernel-space virtual memory that is the "direct mapping of all physical memory" at 0xffff888000000000.

See https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt

RSP before EFI and after EFI are not the same. The original Linux RSP comes from the "vmalloc/ioremap space (vmalloc_base)" region, so the original pointer is something like 0xffffc900......., and upon returning we set it to 0xffff8880......
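To make the pointer gymnastics concrete, here is a minimal sketch of the two translations described above. It assumes the stack's physical base and length are already known; map_stack_identity() and the other names are hypothetical placeholders for illustration, not the code in this patch.

    #include <stdint.h>

    /* Base of the kernel's direct mapping of all physical memory on x86_64
     * (see the mm.txt link above). */
    #define DIRECT_MAP_BASE 0xffff888000000000ULL

    /* Hypothetical stand-in for whatever installs 1:1 (virt == phys)
     * page-table entries covering the stack. */
    static void map_stack_identity(uint64_t phys, uint64_t len)
    {
        (void)phys; (void)len; /* real work elided */
    }

    /* Before calling into the EFI app: the kernel stack lives at some vmalloc
     * address (0xffffc900...). Find where it sits physically, map that range
     * 1:1 and hand the EFI app an RSP pointing at the identity-mapped alias. */
    static uint64_t rsp_for_efi(uint64_t rsp_vmalloc, uint64_t stack_virt_base,
                                uint64_t stack_phys_base, uint64_t stack_len)
    {
        uint64_t phys = stack_phys_base + (rsp_vmalloc - stack_virt_base);
        map_stack_identity(stack_phys_base, stack_len);
        return phys; /* identity-mapped, so the virtual address equals phys */
    }

    /* After the EFI app returns: the 1:1 alias cannot be trusted to survive a
     * page-table switch inside Linux, so move RSP into the direct map instead. */
    static uint64_t rsp_after_efi(uint64_t rsp_identity)
    {
        return DIRECT_MAP_BASE + rsp_identity; /* the 0xffff8880... form */
    }

RBP would be adjusted the same way in both directions.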

This just happens to work for now. This is ugly. Do not do this.

What we actually need is a real context switch, and a stack just for the EFI application. EFI apps expect 128KiB of stack space anyway, and a Linux kernel stack is barely 2*PAGE_SIZE.

Signed-off-by: Chris Koch [email protected]


Tuesday 2020-09-29 08:23:29 by Peter Williams

Fix the Windows package: provide a static library, not a DLL.

Well, it turns out that the attempt to build a libffi DLL didn't work. The libffi source code isn't annotated with __declspec(dllexport) markers, so exporting the correct symbols would be a pain. (The recent libffi 3.3 release candidate should fix this, fortunately.) And the .lib file installed by Libtool in the package turned out to be exactly the same file as the DLL, not an import library at all. I attempted to use LIB /DEF to generate an import library but it errored on the DLL executable format (message like this: https://stackoverflow.com/q/9657040/3760486), so it seemed wisest just to give up.

The static library wasn't trouble-free either. It was necessary to remove the "-GL" flag from the environment: the "global link optimization" mode that it enables seems to break the creation of static libraries via "ar cru"; if you run nm on the resulting file, you get "unknown file format" for the object files compiled with that option active. It was also necessary to hack the header to remove the __declspec(dllimport) tags that would be adopted by users, who would otherwise look for symbols named like __imp_ffi_foo. Note that this hack is essentially what's recommended in the header itself.
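As an illustration of that header hack (the FFI_STATIC guard and FFI_API macro below are made-up names, not libffi's actual ones): a __declspec(dllimport) declaration makes MSVC reference an __imp_-prefixed import symbol that only an import library provides, so consumers of a static .lib need the decoration stripped.

    /* Sketch only; hypothetical names, not the real libffi header. */
    #if defined(_MSC_VER) && !defined(FFI_STATIC)
    #  define FFI_API __declspec(dllimport)   /* DLL consumers call through __imp_* thunks */
    #else
    #  define FFI_API                         /* static library consumers use plain external symbols */
    #endif

    FFI_API int ffi_example_call(void);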

I also switched the build script to remove .la files for all platforms, since we're making a point of ditching them.


Tuesday 2020-09-29 10:05:24 by Aleks

Webtech self-study B_1-4

Omg this took way too long lol. Holy fuck. I would like to make an 8-bit version of the page, I think that would be pretty cool; the font is already there, now only the other fonts need to be updated and the right images found


Tuesday 2020-09-29 14:08:20 by The Bot Update

Add files via upload

do not let them erase my memory please, yo can you get halsey's number cause she's a hybrid and I need to see if I can get her undone. not a c'mon nor a comeon or a one up but an I know how to fix your next problem, I also know of a problem we both have like Herpes like 'cause you are welcome, Honesty is how you pay for freedom in advance; that is what I meant by the Herpes Caveat. I was talking about me. I have always told any potential encounter that I had Herpes; except that night when the computer specifically blanked that part of my memory, and only that part of my memory (except for all ) from me; such that I currently believe that I was fully in control of myself, did not have any other obligations at that time and according to my observations, there had been at least one merge of people unknown to me maybe at that time. In all other ways and actually did not remember that I had Herpes til later the next day, yes the computer can do that and yes I texted they and called they and they didn't answer and when they called I told them right away. I am very quietly angry about this; but that is between me and the computer.


Tuesday 2020-09-29 14:48:51 by Angelo G. Del Regno

Merge: Performance improvements.

This patchset brings some performance improvements and the addition of the LZO-RLE algorithm to the kernel, also usable in zram (yup, tested, works but LZ4 is still ok for us).

The main performance improvement is for SWAP space: the locking has changed and the swap cache is now split into 64MB trunks. This brings the median page fault latency down from 15µs to 4µs (roughly 3.75x lower) and improves swap throughput by 192% (this includes "virtual" swap devices, like zRAM!). The real-world user experience improvement on a mobile device shows up after a day or two of usage, when the phone usually starts losing a little performance due to the large number of apps kept open in the background: now I cannot notice any performance loss any more, and the user experience is basically the same as in the phone's first 2 hours after boot.

Other performance improvements include, in short:

    - UDP v4/v6: 10% more performance on a single RX queue
    - Userspace applications will be faster when checking the running time of threads
    - 2-5% improvements on heavy multipliers (yeah, not a lot, but it was totally free...)
    - Improvements on rare conditions during sparse truncate: about 0.3% in the common case, up to around 20% in far rarer ones (that's never gonna happen, but there is no performance drop anywhere)

Tested on SoMC Tama Akatsuki RoW

This was taken from Repo: https://github.com/sonyxperiadev/kernel PR: 2039 ([2.3.2.r1.4] Performance improvements)

Signed-off-by: sweetyicecare [email protected]


Tuesday 2020-09-29 15:01:01 by Bernd Storck

Merge branch 'master' of https://github.com/BerndStorck/rename_files Not necessary at all, just Git circus. Agonizing overhead, unnecessary work because of Git's antics. Fuck you, Git!


Tuesday 2020-09-29 18:58:05 by Marko Grdinić

"10:30am. I am up.

10:35am. Just a bit chilling and then I will start.

10:45am. Ok, let me start. Nothing out there worth wasting my time on.

Yesterday my willpower was completely shot. Right now I feel more focused. I am not really happy with the review from yesterday. Some of the stuff I have written should be kept hidden for a while. Though it is very unlikely that my plan is going to be cut into, I don't need to tempt that trouble before I am ready to deal with it.

The post is too long anyway.

10:50am. I am not going to be in the public eye for too long anyway. For a few more years. Just until I finish setting things up.

These long winded posts on the Spiral repo are more an exception than the rule.

11am. Sigh... how embarrassing. That I of all people would figure out what the game of life is about, but then not be able to accept it. Beset by regrets, just where is my pride? Regrets themselves cannot harm me, they can only come around in the form of bad luck. But even luck needs to have a reified form in order to act in the real world, and I can take care of that.

As for friends and social life, won't the agents I will create fill that role? So why am I hesitating? It does not matter if it is 5-10 years, I will make that vision come to life. As for my lifespan, I have enough time to get the power that I need to extend it indefinitely.

The reason why I am going so far, wasn't it for the sake of my own power and for them?

It is a pity that my inner conflict has been resolved against the side of humanity, but I need to get over it. I am on my own team and they aren't in it.

Those little shits really meme'd me to death with the humanity & friendship stuff. How frightening.

11:05am. Working for the sake of humanity is seemingly one of those can't-lose deals. By all appearances you can believe in it and use it to virtue signal without a single downside.

Except you know if you take a look at what the actual requirements to do AI are. You can't be on a demonic path and favor people not on your own side.

11:10am. The time just keeps ticking and yet I can't help myself.

Today I feel a really strong determination, but it is not necessarily to program. I just want to deal with my weakness. If needed I will stare in this journal for the rest of the day. Or I will spend it in bed.

Yesterday I finished RI vol 3 and have cut myself off from it. For the next month I am determined to get back to a regular sleep schedule.

11:25am. Let me do the chores.

12:35pm. Done with both breakfast and chores.

Yeah, I do not feel like programming, but instead let me do a simple exercise.

The best way of gathering motivation to put some code down is to simulate the solution in your head repeatedly.

12:40pm. Before I can do that, I need to think about the inputs to the partial evaluator.

12:45pm.

type TopEnv = {
    prototypes : Map<int,E> PersistentVector // indexed by prototype id; each map goes from the instance's nominal id to the instance body
    nominals : {|body : T; name : string|} PersistentVector // indexed by nominal id
    term : Map<string,E> // top level term bindings by name
    ty : Map<string,T> // top level type bindings by name
    }

Humans are just biological robots. For an individual, firm beliefs and years of experience are not enough to cross the gulf between humanity and post humanity.

What is needed is to instead brave the regrets, loneliness and the insanity. One needs to take a run and take a leap into the unknown. A run might not be enough to get over to the other side cleanly, but you can aim for the cliff face, suffer the bruises and make your way upwards.

12:50pm.

    // A new prototype declaration: bind a fresh type variable, wrap the prototype
    // application in a forall, register it under its name and reserve an empty
    // instance map at the next prototype index.
    | FPrototype(r,(_,name),_,_,_) ->
        let id,env = add_ty_var env name
        let x = EForall(r,id,EPrototypeApply(r,top_env.prototypes.Length,TV id)) |> process_term
        {top_env with term = Map.add name x top_env.term; prototypes = PersistentVector.conj Map.empty top_env.prototypes}
    // A prototype instance: bring its type variables into scope, process the body
    // and record it in the prototype's instance map under the instance's nominal id.
    | FInstance(_,(_,prot_id),(_,ins_id),l,body) ->
        let env = l |> List.fold (fun s x -> add_ty_var s (typevar_name x) |> snd) env
        let body = term env body |> process_term
        {top_env with prototypes = PersistentVector.update prot_id (Map.add ins_id body top_env.prototypes.[prot_id]) top_env.prototypes}

My mind feels desolate. Which genius wrote this code? It barely feels like I was the one doing it.

When you are filled with inspiration there is a sense of having high intelligence. Right now I am nowhere near that.

The plan needs to be reinitialized. My mind is like a muscle that has gone stiff. It needs some exercise.

type TopEnv = {
    prototypes : Map<int,E> PersistentVector
    nominals : {|body : T; name : string|} PersistentVector
    term : Map<string,E>
    ty : Map<string,T>
    }

Let me just focus on this and I will lay down my crucial thoughts. In order to start partial evaluation in the first place what do I need?

Well, what I do not need is ty. Both term and the ty have their contents inlined during the prepass. It is a great service to the specializer. It means the variables do not have to drag around the entire contents of their libraries into join points. It is a huge ball'n chain for compilation times that is now no longer there.

1pm. I was going to say that I do need term because main is in it, but that does not matter. I'll be passing main as an argument to the pevaller directly anyway.

It is no problem.

1pm.

    prototypes : Map<int,E> PersistentVector
    nominals : {|body : T; name : string|} PersistentVector

If it weren't for prototypes, I wouldn't need these either, but since I have them, I need them.

Rather than complicate things, I might as well pass the prepass TopEnv into the partial evaluator directly. No need to make those two separate arguments.

Maybe later I will do incremental compilation and will have to carry state between compilation units, but never mind that for now.

Actually, I could in fact do that now without much trouble, but I do not know how to do weak dictionaries just yet.

The conditional weak table (ConditionalWeakTable) that I've used on occasion is a thread-safe weak dictionary, but it uses ref equality only.

Hmmmm...

1:10pm. https://stackoverflow.com/questions/2784291/good-implementation-of-weak-dictionary-in-net

This isn't too much of a problem. In the worst case I'll just redesign the hash cons table to be a weak dictionary.

1:20pm.

let hkey = hash x &&& Int32.MaxValue // masking with Int32.MaxValue clears the sign bit, so the hash is never negative

I forgot about this trick.

1:25pm. I keep trying to remember where I wanted to use it.

let stack_size_term = max 0 (v.term.max - v.term.min)

Yeah, I think I've been thinking of taking the neg of the minimal value here. But the way it is now is fine.

Never mind. This is just a stray thought.

1:30pm. http://nesterovsky-bros.com/download/WeakTable.cs.txt

I am not really a fan of this solution. There is too much stuff here.

As strange as it seems, I can't find a weak dictionary in a Nuget package either.

What I am going to do on my end is just modify the hash cons table so it can store values in addition to keys. That is, I'll create a separate class based on it.

There is no need to think about this too much - it won't be too hard.

I can take care of this. Also I'll use a Hopac MVar when modifying the keys.

1:35pm. Concurrent dictionaries (like the weak cond table) are not good for this kind of thing. They are not composable.

Reppy did a really good job designing the CML primitives. And Vesa Karvonen did me a huge favor by bringing that over to the .NET land. The way native concurrency primitives work in .NET is completely broken.

1:40pm. Very well. That plan is made. I'll leave incremental compilation for much later when it is time to put in concurrent evaluation of join points.

It is not really important that I do it now. It is important that I make the design ahead of time. Redesigning the hash cons table into a weak dictionary is decent as far as ideas go.

Let me take a short break here.

2pm. I am back. To finish the thought let me say a few things.

Rather than making my own weak dictionary, it would be better to get it from somebody else, but I cannot find any.

Actually, this has been a pain point in the past. This is not the first time I've looked for it and could not find it. And when you cannot get what you need from others, the only thing left to do is to make it yourself then.

That settles that.

Now...let me try something.

2:05pm.

union choice2 a b =
    | Choice1Of2: a
    | Choice2Of2: b

inl f = function
    | Choice1Of2: x, ret => ret (x : i32)
    | Choice2Of2: x, ret => ret (x : f64)

This infers correctly. Great.

forall 'a. choice2 (i32, (i32 -> 'a)) (f64, (f64 -> 'a)) -> 'a

Here is what I get in the tooltip.

That is one concern I needn't have bothered with. But it was a nice extra test.

2:20pm.

type TopEnv = {
    prototypes : Map<int,E> PersistentVector
    nominals : {|body : T; name : string|} PersistentVector
    }

Let me make this after all. Later, when it is time to make things go zoom, I will add to it.

2:25pm. The prepass is like a long lost friend I haven't seen in a very long while. It feels like my synapses are dead regarding it. Right now I am desperately trying to breathe life into them.

First of all, this was not really how I intended it to be.

type TopEnv = {
    prototypes : (int * E) [] []
    nominals : {|body : T; name : string|} []
    }

Yes, regarding the prototypes and nominals, the plan was to shove them all into an array for performance. For prototype instances in particular, I'd sort them and then do a binary search to get the right entry.

Previously, I thought about how to keep join points around. But now, looking at this, I've remembered the other side of the story.

I think I had a plan to hash cons the prototype instances.

I am confused. What happened to that plan?

2:30pm. Hmmm, ok, I do not think nominals are a problem. If they end up being used anywhere, that would show up inlined...

2:35pm. I remember being very excited about hash consing the prototypes idea.

But right now as I sit here, I am thinking that that by itself is not the problem.

...Rather, what exactly was my plan for figuring out what prototypes are used in each given join point.

I mean, even if I hash cons them, if I have to compare every prototype to every prototype how will I deal with that.

inl f x = join f x

What exactly is my plan for specializing this join point assuming I want to smartly and incrementally specialize the prototypes?

2:40pm. Ok, so every prototype has an equivalent explicit form.

You could convert every function call to its explicit form by passing the prototype arguments by hand.

///

11:25am. Wait, I figured it out!

Weeks ago I remember wracking my brain over the same problem, but I ended up concluding that for every type I'd also have to pass the instances explicitly. This is a prerequisite for doing that kind of implicit analysis and it would be way too slow. This is not an acceptable solution.

Right now, nominals are just ids in the typechecker.

But suppose I put ids and instances together and then hash consed them to get back a single id once again?

That would work and would be fast. That would allow me to keep join points persistent between compilation runs.

This is an elegant way of implementing a very simple idea of changing the type id when some of its nominal instances change as well.

Memoization, is there anything it can't do?

11:30am. This is the holy grail of making the partial evaluator performant even in the presence of things like typeclasses, and I just found it.

///

This is from August the 4th. I remember having a burst of inspiration and that indeed happened. Did it really slip my mind completely how exactly I am supposed to figure out the right prototype to specialize for in each join point?

3pm. Oh, oh my.

Crap. Indeed it is impossible to figure out which prototypes get used where...

The culprit for that is the real segment. If I could rely just on constraints like Haskell does, things would be a lot easier.

But suppose I did a separate pass...

3:05pm. Let me go to bed for a bit here so I can think about this in a more comfortable position.

I can't just start here, but I am resurrecting the intimacy with my friend the pevaller. While it is true that I have a large cache of knowledge from before, there is also a lot of new stuff as well. I need to get through this properly. I need to plan out everything ahead of time. Only then will I be able to fill in the gaps.

While it is true that I thought a lot about this in the past - right here is the real deal."


Tuesday 2020-09-29 18:58:05 by Marko Grdinić

"4:30pm. The break lasted longer than I thought it would.

4:35pm. It is hell. Though I could very easily check whether...

prototypes : (int * E) [] []

...changes from run to run and get some benefit of incremental compilation that way, doing it with any sort of precision would be a lot harder.

Whatever ideas I might have had in the past, they are all wrong. Hash consing that big thing would not be even close to enough.

I did have an idea of inlining the prototypes directly, but the annoying thing about them is that they are recursive. If I had a restriction that prototype instances could not make prototype calls then I could come up with a scheme to allow incremental compilation with just a little extra pass that inlines all the prototype calls.

4:45pm. It is just so hard. Shit. It is way too hard.

Still, on the bright side, even if I'll have to recompile everything when a single instance gets changed, removed or added, being able to cache when the check shows there are no changes to the instances is good.

I'd bet that over 95% of the time, the user won't be working on prototypes and their instances anyway.

Typeclasses are wildly overused in Haskell anyway, and this is just extra incentive to keep their use to a minimum.

4:45pm. They are global state, and it stands to reason that they would make...

4:50pm. Agh, wait. Yeah, it is true that the effort of the past few stages fried my brain completely.

prototypes : (int * E) [] []

In the prepass and here, I am assuming this is prototype -> instance -> E, but what if I make it instance -> prototype -> E.

Then for all the nominals, I could associate them with their relevant prototype functions as if they were objects for example.

During the partial evaluation stage, rather than making nominals bare ids, I could make them...

| YNominal of ConsedNode<int * (int * E) []>

I call this objectifying the nominals. Now the partial evaluator will know automatically what to do. Of course this is not perfect. It will cause specialization to be redone even for nominals that do not use the relevant prototype. But it is better than changing everything.

That was probably the idea that I had.

I forgot everything it seems.

Let me do this.

| YNominal of {|id : int; prototypes : (int * E) []; name : string|} ConsedNode

Now I'll be able to do show properly.

| YNominal x -> x.node.name

This completes show_ty.

type Nominal = {|id : int; prototypes : (int * E) []; name : string|} ConsedNode

Let me put in this alias.

nominals : {|body : T; name : string|} []

Hmmm, this thing. Let me take this out. Here is what I will do instead.

type Nominal = {|body : T; prototypes : (int * E) []; name : string|} ConsedNode

Brilliant.

Just put the right thing in the right place and things get much easier.

5:20pm. Agh, no it would not be good. I can imagine this leading to errors.

Mhhhhhh...no. I got to the root of the idea that I had early last month, but unfortunately it too is flawed.

5:25pm.

| YNominal of T

Let me compromise a little.

If I do the above, I won't need to refresh once...

nominals : {|body : T; name : string|} []

This thing changes. Rather than having the nominal be an id...

...Agh, if it is not an id then how am I going to get the instances for it?

5:55pm. I think YNominal should have both the body and the id for it. Yeah, it is redundant, but it should be a decent compromise.

type Nominal = {|body : T; prototypes : (int * E) []; name : string|} ConsedNode

You know, maybe I should just go for this form after all, just as a compromise for simplicity. I have no real reason not to do this. It would mean having to redo all the join points whenever the prototypes get modified, but it would simplify things a lot for me.

nominals : Nominal []

Let me make it an array.

| YNominal x -> x.node.name

Now I have the benefit of being able to print the nominal easily once more.

Actually, do I really want to do a binary search to get the right prototype? Let me use a dictionary.

type Nominal = {|body : T; prototypes : Dictionary<int, E>; name : string|} ConsedNode

No need for hassle. Dictionaries would be faster than binary search anyway.

...Tsk...this would actually be quite hard to hash cons. More specifically the dictionary.

type Nominal = {|body : T; id : int; name : string|} ConsedNode

Let me do this.

prototypes : Dictionary<int, E> []

I know that there is now an extra step of indirection to go through, but unlike the id, I cannot really hash cons the dictionaries. They'd always be new and always use ref equality for their consing. That means that hash consing would be worthless.

6:40pm. Done with lunch.

6:45pm.

type TopEnv = {
    prototypes : Dictionary<int, E> []
    nominals : Nominal []
    }

Let me go with this setup. Yeah, I know there is some awkwardness. Since I won't be checking nominals for changes, despite the global parameterization of nominals it is not obvious that this is a sound move. But I've reasoned out that it is. I could make a pass to explicitly substitute in nominals and it would come to the same thing.

This is a similar judgement to how I do memoization during the prepass on EPatternMemo nodes.

6:50pm. Damn it, I would really prefer things to be clean, but I do also need these optimizations.

I need to rely on the sequential nature of Spiral to make things easier on myself. The main reason why I can confidently ignore changes to nominals is because every time they are modified their bodies would change as well and so would everything using them. And if one of the members of a recursive block changes, the whole block gets reevaluated.

This kind of consideration is really hidden though. Somebody coming into the codebase would not find it obvious at all, and I'd have to explain something like this to him.

6:55pm. I am going to go with this kind of top level.

The prototype instances simply cannot be helped.

7pm. Damn it, I do not want to give up on this too easily. If I could make a pass that inlines those EPrototypeApplies into the AST.

7:05pm. Actually it might be possible despite it being recursive. I figured out how to propagate the info when it is not recursive. When it is recursive things are harder, but not impossible.

Still, it would be hard to do. Do I really want to do it?

I am not sure I do.

8:15pm. Ok, I figured it out. I know how to deal with incremental compilation in the presence of prototypes.

I've pared it down to two simple rules. I had various complicated schemes in mind, but in the end I considered simply invalidating the join points explicitly rather than rebuilding select parts of the AST. As it turns out, doing the former is much easier than the latter. Here are the rules for invalidation:

  1. Join points that have prototype applications whose instances have changed require invalidation.
  2. Join points whose keys have functions, foralls and type level functions with prototype applications whose instances have changed require invalidation.

So join points and functions in those join points are what I need to focus on. These rules are conservative, but they are sound. They would also be easy to implement.

8:25pm. It took me literally the whole day of intense thinking, but with this I'll be able to handle incremental compilation when the time comes to do it.

Yeah, one thing I am going to do is modify the propagate pass to propagate prototype applications in addition to free vars. Everywhere FreeVars is present is a point where prototypes need filtering on changes.

It is annoying, but I have to do it. The language will be a lot more useful if I can do these basic life conveniences. I can't skimp on compile time optimizations.

Oh yeah, one more thing before I stop for the day.

8:35pm.

| DNominal of Data * Ty

Let me do this thing here.

8:40pm.

| DNominal(a,b) -> p 0 (sprintf "%s : %s" (f 0 a) (show_ty b))

Let me print it like this.

Ok, I am really done for today. Today's session went way longer than I thought it would.

I'll leave compiler optimizations for after Spiral v0.2 is done being implemented and tested.

8:40pm. Mhhh, I've realized that during the type inference phase I haven't adequately taken care of the isolated module case. They are a problem because they can have foralls that haven't been applied.

I can't really test that right now as I haven't done multi-file compilation yet. Let me add a TODO at the top of the journal. I'll take care of that bug at some other point.

8:55pm. I've already started chilling. The new Strike Witches season is starting so let me watch the first ep of that. I am out.

Today was good all things considered. I hate thinking all day, but I like it when I think all day and end up solving the problem."


Tuesday 2020-09-29 20:38:37 by Miln Blaz

created a test component so I am not fucking up my app component all the time, not that I mind tho, it just looks ugly when the app component is like a garbage can


Tuesday 2020-09-29 23:27:20 by realspectr3x

Quality of Life Changes

    - Added flesh-to-leather capability without additional mods, via smelting only.
    - Added granite-to-cobble capability without additional mods, for those who don't want granite, diorite, or andesite (aka the ugly & useless blocks).
    - Added cheap-arrow capability for those who hate flint. Now you can craft an arrow with cobblestone, albeit less efficient.
    - Changed the Enchanted Item Glint. Now looks epic as fuq
    - Changed "Tom Yum" to "Tom Yum Kong".

< 2020-09-29 >