< 2020-08-20 >

2,557,320 events, 1,190,519 push events, 1,932,120 commit messages, 146,307,850 characters

Thursday 2020-08-20 02:05:33 by Riley Scott Jacob

Add remote storage support

Works... sort of. The program itself works fine, mostly. There is that annoying behavior where sshfs seems to sometimes just hang forever. I have no idea what causes it, and it is driving me seriously insane -- especially because I can find only one reference to similar behavior online.

When it does work, though, there does seem to be one bug that occurs truly in-program: the subfolders don't seem to be detected by UI_Project. That is to say, if we create a remote storage location and then go to the projects context, nothing is visible. If we create a new project, everything works mostly normally (if I check the location in the filesystem manually, the folder is created correctly and where expected), but for some reason nothing shows up in the folder list model. This remote nonsense is super frustrating.


Thursday 2020-08-20 04:16:28 by craig[bot]

Merge #51894 #53076

51894: kvserver: don't unquiesce on liveness changes of up-to-date replicas r=nvanbenschoten a=nvanbenschoten

Mitigates the runtime regression in #50865. Implements the proposal from cockroachdb/cockroach#50865 (comment).

In #26911, we introduced the ability to quiesce ranges with dead replicas that were behind on the Raft log. This was a big win, as it prevented a node failure from stopping any range with a replica on that node from quiescing, even if the range was dormant. However, in order to ensure that the replica was caught up quickly once its node was brought back online, we began aggressively unquiescing when nodes moved from not-live to live.

This turns out to be a destabilizing behavior. In scenarios like #50865 where nodes contain large numbers of replicas (i.e. O(100k)), suddenly unquiescing all replicas on a node due to a transient liveness blip can bring a cluster to its knees. Like #51888 and #45013, this is another case where we add work to the system when it gets sufficiently loaded, causing stability to be divergent instead of convergent and resulting in an unstable system.

This fix removes the need to unquiesce ranges that have up-to-date replicas on nodes that undergo liveness changes. It does so by socializing the liveness state of lagging replicas during Range quiescence. In doing so through a new lagging_quiescence set attached to quiescence requests, it allows all quiesced replicas to agree on which node liveness events need to result in the range being unquiesced and which ones can be ignored.
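Stated operationally (a minimal F# sketch for illustration only - CockroachDB itself is written in Go, and the names below are hypothetical), each quiesced replica can apply the same purely local decision rule:

    // Hypothetical sketch of the decision rule described above: a liveness
    // change matters only if the newly live node held a lagging replica
    // at the time the range quiesced.
    type NodeId = int

    let shouldUnquiesce (laggingQuiescence : Set<NodeId>) (newlyLiveNode : NodeId) : bool =
        // Liveness changes of nodes with up-to-date replicas are ignored entirely.
        Set.contains newlyLiveNode laggingQuiescence

Because every replica receives the same lagging_quiescence set with the quiescence request, all of them reach the same verdict for any given liveness event.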

This allows us to continue to provide the guarantee that:

If at least one up-to-date replica in a Raft group is alive, the Raft group will catch up any lagging replicas that are also alive.

For a pseudo-proof of this guarantee, see cockroachdb/cockroach#50865 (comment).

A secondary effect of this change is that it fixes a bug in quiescence as it exists today. Before this change, we did not provide this guarantee in the case that a leader was behind on its node liveness information, broadcasted a quiescence notification, and then crashed. In that case, a node that was brought back online might not be caught up for minutes because the range wouldn't unquiesce until the lagging replica reached out to it. Interestingly, this may be a semi-common occurrence during rolling restart scenarios.

This is a large improvement on top of the existing behavior because it reduces the set of ranges with a replica on a given dead node that unquiesce when that node becomes live to only those ranges that had write activity after the node went down. This is typically only a small fraction of the total number of ranges on each node (operating on the usual assumption that in an average large cluster, most data is write-cold), especially when the outage is short. However, if all ranges had write activity and subsequently quiesced before the outage resolved, we would still have to unquiesce all ranges upon the node coming back up. For this reason, the change is primarily targeted towards reducing the impact of node liveness blips and not reducing the impact of bringing nodes back up after sustained node outages. This is intentional, as the former category of outage is the one more likely to be caused by overload due to unquiescence traffic itself (as we see in #50865), so it is the category we choose to focus on here.

Release note (performance improvement): transient node liveness blips no longer cause up-to-date Ranges to unquiesce, which makes these events less destabilizing.

@petermattis I'm adding you as a reviewer because you've done more thinking about Range quiescence than anyone else. But I can find a new second reviewer if you don't have time to take a look at this.

53076: sql/rowcontainer: avoid over-allocating when rowCapacity provided r=nvanbenschoten a=nvanbenschoten

This commit adjusts RowContainer's handling of the optional rowCapacity parameter to be more efficient and more closely aligned with what a user of it wants. Before this change, the only effect that rowCapacity had was to pre-allocate a sufficient number of chunks during the first call to AddRow to accommodate the specified capacity. This was lacking in two ways:

  1. the RowContainer still allocated 64-row chunks, so a rowCapacity above 64 rows would still perform multiple heap allocations. The only benefit rowCapacity had was avoiding resizing of the chunk slice itself, which provided minimal actual benefit.
  2. the RowContainer still allocated 64-row chunks, so a rowCapacity below 64 rows would still allocate a chunk large enough to accommodate 64 rows.

This commit addresses these deficiencies by adjusting the meaning of the rowCapacity parameter. Now when provided, the value is used to configure the size of chunks that are allocated within the container, such that if no more than the specified number of rows is added to the container, only a single chunk will be allocated and wasted space will be kept to a minimum. This is a measurable win for the parameter's only user, valuesNode.
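To make the sizing rule concrete, here is a hedged F# sketch of the behavior implied above (the real implementation is Go code in sql/rowcontainer; the 64-row default and the general shape are taken from this description, everything else is illustrative):

    // Hypothetical sketch: size chunks from the capacity hint so that a
    // container never exceeding rowCapacity rows allocates exactly one
    // right-sized chunk, instead of one or more fixed 64-row chunks.
    let defaultChunkRows = 64

    let chunkRows (rowCapacity : int option) : int =
        match rowCapacity with
        | Some cap when cap > 0 -> cap // one chunk, minimal wasted space
        | _ -> defaultChunkRows        // no hint: keep the old default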

This improvement shows up prominently in single-node kv0 benchmarks like kv0/enc=false/nodes=1 and kv0/enc=false/nodes=1/cpu=32. Before this change, the 64-row chunk allocation in valuesNode's RowContainer was the second most expensive (by total bytes allocated, alloc_space) allocation in the system. Each 64-row chunk resulted in a 2 KiB heap allocation. Added together, this was responsible for 5.64% of all memory allocated by Cockroach while running the workload. With this change, the chunk size for the RowContainer used by valuesNode can be as small as 1-row, which translates to 32B heap allocations. This drops the aggregate cost of this allocation down to just 0.19% of all memory allocated by Cockroach.

Before: [screenshot of the alloc_space heap profile before the change]

After: [screenshot of the alloc_space heap profile after the change]

The impact on top-line throughput for the workloads is measurable:

    name                   old ops/sec  new ops/sec  delta
    kv0/enc=false/nodes=1   12.2k ± 4%   12.6k ± 4%  +3.79%  (p=0.002 n=10+10)

    name                   old p50(ms)  new p50(ms)  delta
    kv0/enc=false/nodes=1    4.91 ± 4%    4.70 ± 0%  -4.28%  (p=0.001 n=10+8)

    name                   old p99(ms)  new p99(ms)  delta
    kv0/enc=false/nodes=1    19.3 ± 3%    18.9 ± 6%    ~     (p=0.337 n=10+10)

Release note (performance improvement): a large heap allocation performed during INSERT statements was removed, resulting in an increase to throughput for single-row INSERT statements.

Co-authored-by: Nathan VanBenschoten [email protected]


Thursday 2020-08-20 04:36:37 by Minchan Kim

mm: use put_page() to free page instead of putback_lru_page()

Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs, and so on) and easy fork failures.

The problem was fragmentation, caused mainly by zram and GPU drivers. Under memory pressure, their pages were spread out across all pageblocks and could not be migrated by the current compaction algorithm, which supports only LRU pages. In the end, compaction cannot work well, so the reclaimer shrinks all of the working-set pages. This made the system very slow and even made fork fail easily, since fork requires order-[2 or 3] allocations.

Another pain point is that these pages cannot use CMA memory space, so when an OOM kill happens, I can see many free pages in the CMA area, which is not memory efficient. In our product, which has a big CMA area, zones are reclaimed far too excessively to allocate GPU and zram pages even though there is lots of free space in CMA, so the system easily becomes very slow.

To solve these problems, this patch series adds a facility to migrate non-LRU pages by introducing new functions and page flags to help migration.

    struct address_space_operations {
        ..
        bool (*isolate_page)(struct page *, isolate_mode_t);
        void (*putback_page)(struct page *);
        ..
    };

New page flags:

PG_movable
PG_isolated

For details, please read description in "mm: migrate: support non-lru movable page migration".

Originally, Gioh Kim had tried to support this feature, but he moved on, so I took over the work. I took much of the code from his work and changed it a little, and Konstantin Khlebnikov helped Gioh a lot, so he deserves a lot of the credit, too.

And I should mention Chulmin, who tested this patchset heavily and helped me find many bugs. :)

Thanks, Gioh, Konstantin and Chulmin!

This patchset consists of five parts.

  1. clean up migration:
     - mm: use put_page to free page instead of putback_lru_page

  2. add non-lru page migration feature:
     - mm: migrate: support non-lru movable page migration

  3. rework KVM memory-ballooning:
     - mm: balloon: use general non-lru movable page feature

  4. zsmalloc refactoring for preparing page migration:
     - zsmalloc: keep max_object in size_class
     - zsmalloc: use bit_spin_lock
     - zsmalloc: use accessor
     - zsmalloc: factor page chain functionality out
     - zsmalloc: introduce zspage structure
     - zsmalloc: separate free_zspage from putback_zspage
     - zsmalloc: use freeobj for index

  5. zsmalloc page migration:
     - zsmalloc: page migration support
     - zram: use __GFP_MOVABLE for memory allocation

This patch (of 12):

Procedure of page migration is as follows:

First of all, it should isolate a page from the LRU and try to migrate the page. If that is successful, it releases the page for freeing. Otherwise, it should put the page back on the LRU list.

For LRU pages, we have used putback_lru_page for both freeing and putting back to the LRU list. That is okay because put_page is aware of the LRU list, so if it releases the last refcount of the page, it removes the page from the LRU list. However, it performs unnecessary operations (e.g., lru_cache_add, pagevec, and flag operations - not significant, but not worth doing) and makes it harder to support new non-LRU page migration, because put_page isn't aware of a non-LRU page's data structure.

To solve the problem, we could add a new hook in put_page with a PageMovable flag check, but that would increase overhead in the hot path and would need a new locking scheme to stabilize the flag check against put_page.

So, this patch cleans things up by dividing the two semantics (i.e., put and putback). If migration is successful, use put_page instead of putback_lru_page, and use putback_lru_page only on failure. That makes the code more readable and adds no overhead to put_page.

Comment from Vlastimil: "Yeah, and compaction (perhaps also other migration users) has to drain the lru pvec... Getting rid of this stuff is worth even by itself."

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Minchan Kim [email protected]
Acked-by: Vlastimil Babka [email protected]
Cc: Rik van Riel [email protected]
Cc: Mel Gorman [email protected]
Cc: Hugh Dickins [email protected]
Cc: Naoya Horiguchi [email protected]
Cc: Sergey Senozhatsky [email protected]
Signed-off-by: Andrew Morton [email protected]
Signed-off-by: Linus Torvalds [email protected]
Git-commit: c6c919eb90e021fbcfcbfa9dd3d55930cdbb67f9
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Change-Id: I2c00623996f8017b9c84ed12fcc4d85290a1712c
Signed-off-by: Vinayak Menon [email protected]

Signed-off-by: Paul Reioux [email protected]


Thursday 2020-08-20 08:35:54 by [email protected]

updated abbybot

Updated boorusharp, mysql, and nhentai packages. Added a music player to abbybot. Added music channel checking to see where you want to hear the music. Moved mentions commands to their own capi instance to prevent args from accidentally overriding each other. Renamed the containspaper cs file. Broke the embed builder for sending images. Commands now fall back through multiple boorus when a user sends a message. abbybot's heart springs to life: if nobody talks to abbybot, her heartbeat will die out and she will be eternally depressed. Your activity on Discord should keep abbybot happy, so either talk in a server she is in or DM her! Added a youtube downloader.

todo: move abbybot's heart code into its own, easier-to-manage location. Make abbybot able to tell you good morning or good night. Allow abbybot to tell you when the next episode of an anime comes out. Make it possible to stream the latest pictures of abby directly into a channel. Fix the gelbooru embed system.

clean up code somehow


Thursday 2020-08-20 09:03:15 by Marko Grdinić

"10:30am. I am up. Let me chill for a while and I will try to get some work done. I've been reading manga till late, but on the plus side I slept well.

I need to make up my mind. Either I get back to leveling up my programming skills, or I pretty much stay a loser for the rest of my life. I can either get to work, or wallow in my regret.

When it comes to doing intelligence augmentation for humans, the best way to do it for a human is to just let the machine do the task. You do not get anything better than that. So if I want to get smarter, this is what I should be practicing.

Necromutation is really a high level spell. A sensible wizard would start off by desiring something smaller.

10:35am. I myself, even though I am trying to be rational about it, have too much of a shoot-for-the-stars mentality. It is really amazing, when you think about it, that I spent all this time on the foundations in spite of that. For some reason the people working sensibly are not that interested in it.

It is a clown world. None of the hierarchies in it make a lick of sense. Weak is strong, and I am perennially confused.

10:40am. Forget it. There is no need to accept the Singularity. The path towards inhumanity should be gradual.

I could not accept becoming a programmer back then because I did not see it as power, but what did I expect? That I should have direct access to my own brain? That would have just gotten me killed.

Before I master editing myself, how about I master editing others first? Much safer to do it this way.

10:45am. It is this anti-social tendency to not feel like the power is yours unless you can hold it, that has been killing me from day one. If I had this belief, I would have been so much stronger.

I should be aiming to be a wizard. Unlike dumb peasants shoveling dirt, they only get stronger with age. At the age of 80 peasants are getting ready to be put in a casket, but wizards are readying their undead loli transformation.

10:55am. Let me do my morning reading and then I will start. I am not too inspired, so I think that once it is time to start, I will just look at the code that I've written previously."


Thursday 2020-08-20 11:50:51 by Marko Grdinić

"12:40pm. Let me finally start. The long term might be long, but will also soon be here.

Again and again it gets tedious, and every time I will reinvent myself to bind myself more tightly to the work that I do. My programming should not be outside me.

12:45pm. For a while from here on out, I will get rid of all the distractions. Every time I start reading stuff on the side, it eats away my motivation to do programming. Whatever desire I build up storming gets frittered away as soon as I indulge my hobbies.

12:55pm. Ok... I am thinking. It is ironic that, of all the problems I have to deal with, my focus now is entirely on the hover provider.

Let me describe the problem.

You have a piece of code, you hover the cursor over a variable, then you wait for the type info to appear. The hover provider is what sends a message to the server to get this information.

The problem is - what if typechecking takes a long time? In that time the user could move the cursor somewhere else.

But with the distributed communication that I am doing, it is not easy to tell the server that this has occurred. On the VS Code side, it is expecting a promise. And on the F# side, it will be waiting for the typechecking to finish.

Even if I used a router socket, I can't actually switch cleanly because Hopac is not integrated with NetMQ. I can't change hovers until the current one finishes.

        match x with
        | ProjectFileOpen x ->
            match config x.spiprojDir x.spiprojText with Ok x -> [||] | Error x -> x
            |> Json.serialize
        | FileOpen x ->
            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.Put(x.spiText,res)) >>=. IVar.read res)
            |> run |> Json.serialize
        | FileChanged x ->
            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.Modify(x.spiEdits,res)) >>=. IVar.read res)
            |> run |> Json.serialize
        | FileTokenRange x ->
            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.GetRange(x.range,res)) >>=. IVar.read res)
            |> run |> Json.serialize

I mean, look at this. I am using a router socket here, but there is no difference between it and a response socket in this context. All the cases need to finish before the program can move on.

1pm. The way the VS Code API is organized, it is assumed that all the providers are basically remote procedure calls. That is, they behave much like regular function calls in that they send some data and then get a return value. They aren't async message sends.

1:05pm. I could have more flexibility if I bring in the thread safe NetMQ queue.

That would make things complicated. Instead, let me simplify things to the limit.

use sock = new RouterSocket()

Why don't I in fact make this a response socket? Why not fully embrace the notion that the VS Code plugin (written in TS) and the F# language server are one whole, and that the message sends are really remote function calls.

Then for the hover provider, why don't I put in a timeout of something like 0.25s. If typechecking takes longer than that, the VS Code side can just resend the message. Problem solved.

Using time to transmit information is really an elegant solution. I haven't really ever used it before.
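For concreteness, here is a hedged sketch of what the resend-on-timeout could look like on the requesting side with NetMQ (written in F# to match the server code below, though the real plugin side is TypeScript; uri and the payload are stand-ins). ZeroMQ's request socket is a strict send/receive state machine, so after a timeout the socket has to be dropped and recreated before resending - the "Lazy Pirate" pattern from the ZeroMQ guide:

    open System
    open NetMQ
    open NetMQ.Sockets

    // Send a request; if no reply arrives within the timeout, recreate the
    // socket and resend, up to `retries` attempts. None means we gave up.
    let requestWithRetry (uri : string) (request : string) (timeout : TimeSpan) (retries : int) =
        let mutable reply = None
        let mutable attempt = 0
        while reply.IsNone && attempt < retries do
            attempt <- attempt + 1
            use sock = new RequestSocket()
            sock.Connect(uri)
            sock.SendFrame(request)
            let mutable r : string = null
            if sock.TryReceiveFrameString(timeout, &r) then reply <- Some r
            // else: the server is still typechecking; loop around and resend
        reply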

Actually, the same thing goes for parsing. Right now I am bothered by how the range provider needs to wait for parsing to finish. The user might end up writing 1,000-line functions; I do not want him to wait until parsing is done to get semantic highlighting. I should be using timeouts for this as well.

Well, I'll deal with that particular issue later. Right now, I do not have any way to retrigger the range provider. I'll leave the parsing as it is.

1:20pm. The way it is currently structured is not a big problem.

...I am really getting strong deja vu as I imagine all of this. Suppose I separated out parsing errors from token errors. And suppose that I just send parser errors separately as soon as a particular block finishes.

Then I could easily implement parser timeouts. Even if I do not have a way to retrigger the range provider, it might not really matter since moving the viewport will cause it to do that.

1:25pm. Let me take a short break here.

1:45pm. Let me step away from the screen for a bit. Right now I am thinking about the parser. The way editor support works for it should be redone. The parser is one thing that should have proper timeouts.

I need to do it properly from the top. Then once I do that, I will be ready to tackle typechecking in a mock fashion. The same goes for hover info.

I need to think about this. Yeah, there is no doubt that if I were experienced in this, I would be able to do this much faster. In 2017, Spiral took me a sick amount of time. I can't get impatient with v0.2. It will take whatever it needs to take."


Thursday 2020-08-20 14:24:53 by Marko Grdinić

"3:30pm. I spent some time in bed once again. I think that TC, prepass and the partial evaluator are all in some state of near completeness.

My problem right now is that I am burnt out.

Forget finishing the TC by the end of the month. I do not feel like it at all anymore. I do not even want to think about those 3 phases.

Forget the compiler.

Instead my sole goal will be to just fiddle with editor support. I want something simple. I do not want to carry a huge load anymore. All those great goals can wait.

From now till the end of the month, let's say I will be having a semi-vacation. I'll just focus on simple stuff, like changing that router socket to a response socket, or redoing the parser parts so they are more async - the things I've been thinking about today.

I do not want to think too far into the future. If I had the Inspired Desire, I could just flip a switch and have an endless capability to work. But for now, I'll lower my pace.

Let me take a break and then I will see if I can get something done today.

I'll just aim to modularize the parser.

4pm. Let me start. Forget the past. Forget the future. Remember that you are much stronger when you can give even trivial things time. If you were playing a game, your first goal would not be to finish it as fast as possible, but to have fun. Programming should be the same way.

First of all, let me run the server and then the plugin. I want to see if everything still works.

It has been a while since I did this. It is good to remind myself of the fruits of my labor.

let record_apply = sepBy1 apply (skip_op "@") |>> List.reduceBack (fun a b -> RawTRecordApply(range_of_texpr a +. range_of_texpr b,a,b))

It turns out the parser had red (errors). Completely forgot about this.

4:05pm. Had to fix some examples. Tuples now use `,` even on the type level.

Yeah, this is good. After the effort of the past month my mind feels like a waste. This is almost like rehabilitation for me.

Now that I've verified this works, let me change the router socket to a response socket.

I am going to make communication more neural-like, with timing serving to indicate failure.

4:10pm. That this is good design is obvious - there is no way that hacking things up so the language server is more directly responsive would be right - but I only had this idea, and made the decision to do it, today. And now that I've accepted this as a design principle, it will make many other things a lot easier.

Let me replace that router socket.

let server () =
    let server_blockizer = Utils.memoize (Dictionary()) (fun (x : string) -> run (Blockize.server (IO.Path.GetExtension(x) = ".spi")))

    use sock = new ResponseSocket()
    sock.Options.ReceiveHighWatermark <- Int32.MaxValue
    sock.Bind(uri)
    printfn "Server bound to: %s" uri

    while true do
        // TODO: The message id here is for debugging purposes. I'll remove it at some point.
        let (id : int), x = Json.deserialize(sock.ReceiveFrameString())
        match x with
        | ProjectFileOpen x ->
            match config x.spiprojDir x.spiprojText with Ok x -> [||] | Error x -> x
            |> Json.serialize
        | FileOpen x ->
            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.Put(x.spiText,res)) >>=. IVar.read res)
            |> run |> Json.serialize
        | FileChanged x ->
            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.Modify(x.spiEdits,res)) >>=. IVar.read res)
            |> run |> Json.serialize
        | FileTokenRange x ->
            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.GetRange(x.range,res)) >>=. IVar.read res)
            |> run |> Json.serialize
        |> sock.SendFrame

Here it is. Now I no longer need to mess with return addresses.

Let me see if this works.

Yeah, it does. Nice.

            (let res = IVar() in Ch.give (server_blockizer x.spiPath) (Req.GetRange(x.range,res)) >>=. IVar.read res)
            |> run |> Json.serialize

To be honest, I thought I would come up with something smarter than using run, but it seems this is the best I can do.

I am rather restrained in the way I can design the language server. The way the VS Code API is designed, plus the way ZeroMQ/NetMQ works railroad me heavily. It would be possible to deviate, but it would make a huge mess out of everything.

4:20pm. The next goal is to modularize the parser.

This is a simple thing, but if I try to carry it along with everything else, it makes me nauseous. Forget great goals. Just pretend you are tending to your garden. This kind of task should be relaxing.

It all comes down to how you frame your own mission.

Let me commit here, since no doubt the next part will break some things."


Thursday 2020-08-20 14:35:44 by Andrew Emerick

First commit and first attempt at implementing ML models on 'real' data. Tried to hack together a reasonable answer in a couple of hours (which is why the code is really ugly). Have a few thoughts going forward on how to clean this up for this particular model and for other models in general, particularly when doing something where we generate a model to infer missing data (e.g. using a class to handle making sure all the data is cleaned appropriately and assigning sub-models (right term?) as attributes of that class). ALSO learned very quickly that the 'hard' part isn't really training the model. It's making sure the data is cleaned and prepared properly (even when it's mostly already pretty 'nice', like in this dataset). Clearly worthwhile spending more time learning the significance of individual features and obvious correlations before blindly hitting things with a hammer.

Will try and improve this. But may just scrap it and use a less-popular / less-titanicy dataset as the thing I'll sink more time into.


Thursday 2020-08-20 15:36:09 by J. W. Smith

Some more nip/tuck adjustments to org-ical export

Holy hell this is a nightmare to configure.

Half the problem is iCal itself, which is a fustercluck of a format. (Not that I could do much better myself; this is probably a nightmarish domain to get right.)

Another half of the problem is Google Calendar itself, which is... we'll not go into it.


Thursday 2020-08-20 21:30:08 by phorton1

INTERMEDIATE NON-WORKING VERSION - ALMOST does cross fades across a partially implemented single clip in a single track. AUDIO_FUCKING_NOISE - tried everything (again), turned off USB, reduced screen access, eliminated midi, tried serial text input. THIS FUCKING SUCKS. I CANNOT WORK ON THIS ANY MORE. The thing (w/OCTO) is making noise and I can't stop it. shitfuckshitfuckshitfuckshitfuck


Thursday 2020-08-20 22:13:00 by Sean Kavanagh

Major updates, including sexy af formation energy plots, allowed use of gunzipped vasprun.xml and LOCPOTs for chempot analysis and charge corrections, dealt with spurious pymatgen POTCAR errors, added the Lany-Zunger charge correction, and everything in dope_stuff.py (easy transition levels, formation energies, pickling of results etc.) - a lot of good shit really. I should update the README and example notebooks to show off this stuff, but at this moment I'm unsure where this project will go (#fuckcoronavirusyeh). "If you want to be the best, you've got to beat the best, and the best is 'Blessed'". It is what it is.


Thursday 2020-08-20 22:26:24 by Csillag Krisztián

Merge pull request #24 from csillagkrisztian/development

After a lot of frustration the app is online and working. I understand that all those pull requests were unnecessary, but my emotions got the best of me; please refrain from comparing this with a real-life case :D Thank you!


< 2020-08-20 >