2,985,692 events, 990,158 push events, 1,455,781 commit messages, 92,441,899 characters
put tab-over mode out of its misery
this commit started its life with much more green.
having spent an entire day steadily implementing the blasted thing, it turns out that the sort of access content scripts have to page scopes is just not enough. perhaps some day in the future we can reconsider this after writing a TW plugin that can pick up on postMessage calls or something, but as it stands, there just isn't enough power in executeScript(). webserver mode is so good that the only part of this that honestly hurts is the wasted time.
don't write browser code, people
qcacld-3.0: Disable Werror
FUCK YOU
Signed-off-by: Kazuki Hashimoto [email protected]
Reduce result discount in conSize
Ticket #18282 showed that the result discount given by conSize was massively too large. This patch reduces that discount to a constant 10, which just balances the cost of the constructor application itself.
Note [Constructor size and result discount] elaborates, as does the ticket #18282.
Reducing the result discount reduces inlining, which affects perf. I found that I could increase the unfoldingUseThreshold from 80 to 90 in compensation; in combination with the result discount change I get these overall nofib numbers:
Program           Size    Allocs  Runtime  Elapsed  TotalMem
boyer            -0.3%    +5.4%    +0.7%    +1.0%     0.0%
cichelli         -0.3%    +5.9%    -9.9%    -9.5%     0.0%
compress2        -0.4%    +9.6%    +7.2%    +6.4%     0.0%
constraints      -0.3%    +0.2%    -3.0%    -3.4%     0.0%
cryptarithm2     -0.3%    -3.9%    -2.2%    -2.4%     0.0%
gamteb           -0.4%    +2.5%    +2.8%    +2.8%     0.0%
life             -0.3%    -2.2%    -4.7%    -4.9%     0.0%
lift             -0.3%    -0.3%    -0.8%    -0.5%     0.0%
linear           -0.3%    -0.1%    -4.1%    -4.5%     0.0%
mate             -0.2%    +1.4%    -2.2%    -1.9%    -14.3%
parser           -0.3%    -2.1%    -5.4%    -4.6%     0.0%
puzzle           -0.3%    +2.1%    -6.6%    -6.3%     0.0%
simple           -0.4%    +2.8%    -3.4%    -3.3%    -2.2%
veritas          -0.1%    +0.7%    -0.6%    -1.1%     0.0%
wheel-sieve2     -0.3%   -19.2%   -24.9%   -24.5%    -42.9%
Min              -0.4%   -19.2%   -24.9%   -24.5%    -42.9%
Max              +0.1%    +9.6%    +7.2%    +6.4%    +33.3%
Geometric Mean   -0.3%    -0.0%    -3.0%    -2.9%    -0.3%
I'm ok with these numbers, remembering that this change removes an exponential increase in code size in some in-the-wild cases.
I investigated compress2. The difference is entirely caused by this function no longer inlining
WriteRoutines.$woutputCodes
  = \ (w :: [CodeEvent]) ->
      let result_s1Sr
            = case WriteRoutines.outputCodes_$s$woutput w 0# 0# 8# 9# of
                (# ww1, ww2 #) -> (ww1, ww2)
      in (# case result_s1Sr of (x, _) -> map @Int @Char WriteRoutines.outputCodes1 x
          , case result_s1Sr of { (_, y) -> y } #)
It was right on the cusp before, driven by the excessive result discount. Too bad!
Metric Decrease:
    T12227 T12545 T15263 T1969 T5030 T9872a T9872c
Metric Increase:
    T13701 T9872d
Use fns instead of relying on named! macro (#1)
This will help make our life easier with recursion and such. Most of the struggle is figuring out the return types of the different constructs nom gives us. My understanding is that, when you have a macro named `my_macro`, say, that takes a single argument and returns a function, writing `my_macro!(some_arg)` both expands the macro and calls the resulting function it produces. However, writing `my_macro(some_arg)` (without the bang) simply returns the function it produces, meaning something like `my_macro(some_arg)(some_other_arg)` can make sense (and that's how this PR is using `take_while`, among others).
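To illustrate that calling convention, here is a hypothetical stand-in combinator (not nom's actual implementation): a function-style combinator returns a parser closure, so calling the combinator and then calling its result on input is exactly the `my_macro(some_arg)(some_other_arg)` shape described above.

```rust
// Hypothetical sketch of a function-style combinator (not nom's real API):
// `take_while(pred)` returns a parser closure, which we then call on input.
fn take_while(pred: impl Fn(char) -> bool) -> impl Fn(&str) -> (&str, &str) {
    move |input: &str| {
        // Find the first char failing the predicate; everything before it matched.
        let end = input.find(|c: char| !pred(c)).unwrap_or(input.len());
        (&input[end..], &input[..end]) // (remaining input, matched prefix)
    }
}
```

Here `take_while(|c: char| c.is_ascii_digit())` builds the parser, and calling the result on `"123abc"` runs it, yielding `("abc", "123")`.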
EDIT: This style is very reminiscent of `do` notation in Haskell. The `?` operator suggested by the God @gabaconrado gives us a short-circuiting way of piping the result of one parser to the next.
The next step is to implement variadic arguments and expressions as operators (the first argument to a list doesn't have to be a `char`; it can be an entire expression). Also, not every atom is a `Num`, so we'll make that into a rich enum as well.
Add the ability to add tasks to the Butler (#3)
Here we add some tasks to the Butler. Man's gotta know what to keep track of. This is a simple "create a card on Trello"; nothing fancy. But there are some design decisions I am slowly realizing need to be made.
- Storage: currently, this is in a flat file as JSON, but it could be something more elaborate as time goes on and the entity space grows.
- TDD: TBH, IDGAF. I'd rather build this out first, then create a test harness later. That's just how my brain works. Although, I had to write tests for the Store at some point cos mans didn't understand what Crystal was doing behind the scenes. Turns out all the tests you see for the Store module exist because the fix was to prefix the expression in `Store#unique?` with `!`.
- Sugar: `task add` is a backward way of speaking. I realize this is how programmers speak, but Butlers are more polished. I would have to introduce some sugar for this. What I currently don't know is how this plays with the help text.
- Exceptions: I was just an ass here. I don't know why I tried to maintain both "errors" and "exceptions".
Writing to the datastore is an atomic task. This could become a problem if the number of tasks becomes ginormous, cos I'm handling this in one process. But when we get to that bridge, we'll cross it. I cannot come and go and kill myself. Here's the note that I am aware of it.
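One common way to keep a single-process flat-file write safe, sketched here in Rust purely as an illustration (the project itself is Crystal, and the function name is made up): write the whole JSON blob to a temporary file, then rename it over the real store.

```rust
use std::fs;
use std::io::Write;

// Hypothetical sketch of the atomic-write idea (not the project's code):
// write the full JSON blob to a temp file, then rename it over the store.
// rename() within one filesystem is atomic, so a crash mid-write can never
// leave a half-written store behind.
fn save_store(path: &str, json: &str) -> std::io::Result<()> {
    let tmp = format!("{path}.tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(json.as_bytes())?;
    f.sync_all()?; // make sure the bytes hit disk before the swap
    fs::rename(&tmp, path) // atomic replace
}
```

Readers then always see either the old store or the new one, never a torn write.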
At this point, I can add tasks. But there's nothing I can do with this ability because Butler neither knows how to show me these tasks, nor how to interact with them.
Worklog:
- Put tests in modules; don't litter the global scope
- Better routing + Task instruction
- Fixed logger write bug
- Add the Task entity
- Add a minimal store
- Add "CreateTask"
- Fixed tests setup and teardown
Change to a different TFAR channel
Fuck you Mitch c:
govind123456789/Data-Analysis-Visulalization-of-Different-Dataset-Crimes-Covid19-Market-predictions-Housing-Data-@7b7032544a...
Add files via upload
Context
This dataset contains complete information about various aspects of crimes that have happened in India since 2001. There are many factors that can be analysed from this dataset. Hopefully this dataset helps us understand India better.
Inspiration:
There could be many things one can understand by analyzing this dataset. A few inspirations for you to start with:
- What is the major reason people are kidnapped in each and every state?
- Offenders' relation to the rape victim
- Juveniles' family background, education, and economic setup
- Which state has more crime against children and women?
- Age-group-wise murder victims
- Crime by place of occurrence
- Anti-corruption cases vs. arrests
- Which state has more complaints against police?
- Which state is the safest for foreigners?
Coronavirus disease 2019 (COVID-19) time series listing confirmed cases, reported deaths, and reported recoveries. Data is disaggregated by country (and sometimes subregion). Coronavirus disease (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and has had a worldwide effect. On March 11, 2020, the World Health Organization (WHO) declared it a pandemic, pointing to the over 118,000 cases of the illness in over 110 countries and territories around the world at the time.
This dataset includes time series data tracking the number of people affected by COVID-19 worldwide, including:
- confirmed tested cases of Coronavirus infection
- the number of people who have reportedly died while sick with Coronavirus
- the number of people who have reportedly recovered from it
Big Data Mart Sales Problem
The data scientists at BigMart have collected 2013 sales data for 1559 products across 10 stores in different cities. Also, certain attributes of each product and store have been defined. The aim is to build a predictive model and find out the sales of each product at a particular store.
Using this model, BigMart will try to understand the properties of products and stores which play a key role in increasing sales.
That one fucking like
Honestly, I'm gonna start commenting my code now. It took me way too long to remember the convoluted way in which I programmed this...
Do not rely on this. It's kinda bad but we're working on it
Yeah I know this commit message sucks :P
sched/core: Implement new approach to scale select_idle_cpu()
Hackbench recently suffered a bunch of pain, first by commit:
4c77b18cf8b7 ("sched/fair: Make select_idle_cpu() more aggressive")
and then by commit:
c743f0a5c50f ("sched/fair, cpumask: Export for_each_cpu_wrap()")
which fixed a bug in the initial for_each_cpu_wrap() implementation that made select_idle_cpu() even more expensive. The bug was that it would skip over CPUs when bits were consecutive in the bitmask.
This however gave me an idea to fix select_idle_cpu(); where the old scheme was a cliff-edge throttle on idle scanning, this introduces a more gradual approach. Instead of stopping to scan entirely, we limit how many CPUs we scan.
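The gradual approach can be sketched abstractly (a toy model in Rust, not the kernel code; all names here are illustrative): bound the scan at some `nr` CPUs instead of all-or-nothing, wrapping around the mask the way for_each_cpu_wrap() does.

```rust
// Toy model of the idea (not the kernel implementation): instead of scanning
// either all CPUs or none, scan at most `nr` of them, starting at a given
// offset and wrapping around — the same shape as for_each_cpu_wrap().
fn select_idle_cpu(idle: &[bool], start: usize, nr: usize) -> Option<usize> {
    let n = idle.len();
    (0..n.min(nr))
        .map(|i| (start + i) % n) // wrap around the CPU mask
        .find(|&cpu| idle[cpu])   // first idle CPU within the scan budget
}
```

Shrinking `nr` under load turns the old cliff-edge throttle into a sliding cost cap.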
Initial benchmarks show that it mostly recovers hackbench while not hurting anything else, except Mason's schbench, but not as bad as the old thing.
It also appears to recover the tbench high-end, which also suffered like hackbench.
Tested-by: Matt Fleming [email protected]
Signed-off-by: Peter Zijlstra (Intel) [email protected]
Cc: Chris Mason [email protected]
Cc: Linus Torvalds [email protected]
Cc: Mike Galbraith [email protected]
Cc: Peter Zijlstra [email protected]
Cc: Thomas Gleixner [email protected]
Cc: [email protected]
Cc: kitsunyan [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar [email protected]
Signed-off-by: Raphiel Rollerscaperers [email protected]
Signed-off-by: prorooter007 [email protected]
Signed-off-by: starlight5234 [email protected]