1203 lines (874 loc) · 127 KB

2021-05-06.md


2021-05-06

3,498,824 events, 1,516,346 push events, 2,472,046 commit messages, 191,461,393 characters

Thursday 2021-05-06 00:00:27 by MicrocontrollersDev

Merge pull request #1 from dreamhopping/dreamhopping-patch-1

fuck you cachhy


Thursday 2021-05-06 00:05:33 by lovesmuffins

buffs to Doom (#3113)

  • buffs to Doom

Switched around some of his talents. Brought the Doom damage talent down to level 15: in regular Dota, level 20 is a huge boost in Doom damage, but in OAA most heroes have much tankier items by level 20, so moving it to 15 gives Doom back more of his timing. Removed magic resistance as it's kinda useless in OAA and replaced it with a devour talent so Doom can tempo more earlier on. Decent power spike at level 10 but nothing OP (in testing). Gave Doom a pretty decent damage talent at level 20 to help warrant a cleave build. It was either this or replacing his cleave talent, it being useless. Perhaps with this damage talent we'll see some Doom right-click builds!

Buffed scorched earth damage a tad bit, nothing insane, small buffs to make him viable.

  • Update doom.txt

  • fix

  • Update npc_abilities_override.txt

  • Update doom_bringer.txt


Thursday 2021-05-06 01:41:12 by ChrisANG

Apocalypse angels

Fallen angels with a 2-square snakebite attack and a close-range "prophecy" attack. They also summon aid all over the current dungeon level.

- Try to teleport a knight's move away from you.
- Prophecy attack
  - Insight-dependent chance of taking effect.
  - May:
    - Inflict Doubt
    - Inflict god anger and smiting
    - Lower sanity
    - Lower wis
    - Lower moral
    - Cause one of the 4 rider attacks.
    - Lower protection
    - Lower luck
    - Cause your inventory to be soaked in blood.
- Prophecy summons
  - Invidiak, fallen coure eladrin, walking delirium, or earth elemental
  - Summons arrive in large numbers all over the current level, and stay for 66 turns before being replaced.


Thursday 2021-05-06 02:45:08 by yuuhhe

Revert "There are some special moments in your life when you realize that it's better changing life and learn new stuff. To all the person who helped me through this wonderful experience, I'd like to say thanks. Thank you all guyz ly"

This reverts commit eead69f642fba9e0a0d70508275c4969ce19b20c.


Thursday 2021-05-06 04:21:42 by redmoster55

Security.MD

Do you have. Fun and enjoy life everyone it’s to short to loose it love everyone and everyone is blessed


Thursday 2021-05-06 05:10:47 by Subatomicmc

fuck you im gonna commit every changed line

what are you goingto do about it


Thursday 2021-05-06 07:11:29 by KM198912

fuck you, you do what i want, you dirty whore of a file


Thursday 2021-05-06 07:26:50 by Robert Love

[PATCH] inotify

inotify is intended to correct the deficiencies of dnotify, particularly its inability to scale and its terrible user interface:

    * dnotify requires the opening of one fd per each directory
      that you intend to watch. This quickly results in too many
      open files and pins removable media, preventing unmount.
    * dnotify is directory-based. You only learn about changes to
      directories. Sure, a change to a file in a directory affects
      the directory, but you are then forced to keep a cache of
      stat structures.
    * dnotify's interface to user-space is awful.  Signals?

inotify provides a more usable, simple, powerful solution to file change notification:

    * inotify's interface is a system call that returns a fd, not SIGIO.
      You get a single fd, which is select()-able.
    * inotify has an event that says "the filesystem that the item
      you were watching is on was unmounted."
    * inotify can watch directories or files.

Inotify is currently used by Beagle (a desktop search infrastructure), Gamin (a FAM replacement), and other projects.

See Documentation/filesystems/inotify.txt.

Signed-off-by: Robert Love [email protected] Cc: John McCutchan [email protected] Cc: Christoph Hellwig [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 07:28:41 by Robert Love

[PATCH] ppc64: inotify syscalls

inotify system call support for PPC64

[ I don't think we need sys32 compatibility versions--and if we do, I failed in life. ]

Signed-off-by: Robert Love [email protected] Acked-by: Paul Mackerras [email protected] Cc: Benjamin Herrenschmidt [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 07:33:26 by Laurent Vivier

[PATCH] UML Support - Ptrace: adds the host SYSEMU support, for UML and general usage

  Jeff Dike <[email protected]>,
  Paolo 'Blaisorblade' Giarrusso <[email protected]>,
  Bodo Stroesser <[email protected]>

Adds a new ptrace(2) mode, called PTRACE_SYSEMU, resembling PTRACE_SYSCALL except that the kernel does not execute the requested syscall; this is useful to improve performance for virtual environments, like UML, which want to run the syscall on their own.

In fact, using PTRACE_SYSCALL means stopping child execution twice, on entry and on exit, and each time you also have two context switches; with SYSEMU you avoid the 2nd stop and so save two context switches per syscall.

Also, some architectures don't have support in the host for changing the syscall number via ptrace(), which is currently needed to skip syscall execution (UML turns any syscall into getpid() to avoid it being executed on the host). Fixing that is hard, while SYSEMU is easier to implement.

  • This version of the patch includes some suggestions of Jeff Dike to avoid adding any instructions to the syscall fast path, plus some other little changes, by myself, to make it work even when the syscall is executed with SYSENTER (but I'm unsure about them). It has been widely tested for quite a lot of time.

  • Various fixes were included to handle the various switches between various states, i.e. when for instance a syscall entry is traced with one of PT_SYSCALL / _SYSEMU / _SINGLESTEP and another one is used on exit. Basically, this is done by remembering which one of them was used, even after the call to ptrace_notify().

  • We're combining TIF_SYSCALL_EMU with TIF_SYSCALL_TRACE or TIF_SINGLESTEP to make do_syscall_trace() notice that the current syscall was started with SYSEMU on entry, so that no notification ought to be done in the exit path; this is a bit of a hack, so this problem is solved in another way in next patches.

  • Also, the effects of the patch: "Ptrace - i386: fix Syscall Audit interaction with singlestep" are cancelled; they are restored back in the last patch of this series.

Detailed descriptions of the patches doing this kind of processing follow (but I've already summed everything up).

  • Fix behaviour when changing interception kind #1.

    In do_syscall_trace(), we check the status of the TIF_SYSCALL_EMU flag only after doing the debugger notification; but the debugger might have changed the status of this flag when it continued execution with PTRACE_SYSCALL, so this is wrong. This patch fixes it by saving the flag status before calling ptrace_notify().

  • Fix behaviour when changing interception kind #2: avoid intercepting syscall on return when using SYSCALL again.

    A guest process switching from using PTRACE_SYSEMU to PTRACE_SYSCALL crashes.

    The problem is in arch/i386/kernel/entry.S. The current SYSEMU patch prevents the syscall handler from being called, but does not prevent do_syscall_trace() from being called after this for syscall completion interception.

    The appended patch fixes this. It reuses the flag TIF_SYSCALL_EMU to remember "we come from PTRACE_SYSEMU and now are in PTRACE_SYSCALL", since the flag is unused in the depicted situation.

  • Fix behaviour when changing interception kind #3: avoid intercepting syscall on return when using SINGLESTEP.

    When testing 2.6.9 and the skas3.v6 patch with my latest patch, I had problems with single-stepping on UML in SKAS with SYSEMU. It looped, receiving SIGTRAPs without moving forward; the EIP of the traced process was the same for all SIGTRAPs.

What's missing is to handle switching from PTRACE_SYSCALL_EMU to PTRACE_SINGLESTEP in a way very similar to what is done for the change from PTRACE_SYSCALL_EMU to PTRACE_SYSCALL_TRACE.

I.e., after calling ptrace(PTRACE_SYSEMU), on the return path, the debugger is notified and then wakes up the process; the syscall is executed (or skipped, when do_syscall_trace() returns 0, i.e. when using PTRACE_SYSEMU), and do_syscall_trace() is called again. Since we are on the return path of a SYSEMU'd syscall, if the wake-up is performed through ptrace(PTRACE_SYSCALL), we must still avoid notifying the parent of the syscall exit. Now, this behaviour is extended even to resuming with PTRACE_SINGLESTEP.

Signed-off-by: Paolo 'Blaisorblade' Giarrusso [email protected] Cc: Jeff Dike [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 07:56:39 by Alan Cox

[PATCH] TTY layer buffering revamp

The API and code have been through various bits of initial review by serial driver people but they definitely need to live somewhere for a while so the unconverted drivers can get knocked into shape, existing drivers that have been updated can be better tuned and bugs whacked out.

This replaces the tty flip buffers with kmalloc objects in rings. In the normal situation for an IRQ driven serial port at typical speeds the behaviour is pretty much the same, two buffers end up allocated and the kernel cycles between them as before.

When there are delays or at high speed we now behave far better as the buffer pool can grow a bit rather than lose characters. This also means that we can operate at higher speeds reliably.

For drivers that receive characters in blocks (DMA based, USB and especially virtualisation) the layer allows a lot of driver specific code that works around the tty layer with private secondary queues to be removed. The IBM folks need this sort of layer, the smart serial port people do, the virtualisers do (because a virtualised tty typically operates at infinite speed rather than emulating 9600 baud).

Finally many drivers had invalid and unsafe attempts to avoid buffer overflows by directly invoking tty methods extracted out of the innards of work queue structs. These are no longer needed and all go away. That fixes various random hangs with serial ports on overflow.

The other change in here is to optimise the receive_room path that is used by some callers. It turns out that only one ldisc uses receive_room except as a constant, and it updates it far less often than the value is read. We thus make it a variable, not a function call.

I expect the code to contain bugs due to the size alone but I'll be watching and squashing them and feeding out new patches as it goes.

Because the buffers now dynamically expand you should only run out of buffering when the kernel runs out of memory for real. That means a lot of the horrible hacks high performance drivers used to do just aren't needed any more.

Description:

tty_insert_flip_char is an old API and continues to work as before, as does tty_flip_buffer_push() [this is why many drivers don't need modification]. It now also returns the number of chars inserted.

There are also

tty_buffer_request_room(tty, len)

which asks for a buffer block of the length requested and returns the space found. This improves efficiency with hardware that knows how much to transfer.

and tty_insert_flip_string_flags(tty, str, flags, len)

to insert a string of characters and flags

For a smart interface the usual code is

len = tty_buffer_request_room(tty, amount_hardware_says);
tty_insert_flip_string(tty, buffer_from_card, len);

More description!

At the moment tty buffers are attached directly to the tty. This is causing a lot of the problems related to tty layer locking, also problems at high speed and also with bursty data (such as occurs in virtualised environments)

I'm working on ripping out the flip buffers and replacing them with a pool of dynamically allocated buffers. This allows both for old style "byte I/O" devices and also helps virtualisation and smart devices where large blocks of data suddenly materialise and need storing.

So far so good. Lots of drivers reference tty->flip.*. Several of them also call directly and unsafely into function pointers it provides. This will all break. Most drivers can use tty_insert_flip_char which can be kept as an API but others need more.

At the moment I've added the following interfaces, if people think more will be needed now is a good time to say

int tty_buffer_request_room(tty, size)

Try and ensure at least size bytes are available, returns actual room (may be zero). At the moment it just uses the flipbuf space but that will change. Repeated calls without characters being added are not cumulative (i.e. if you call it with 1, 1, 1, and then 4 you'll have four characters of space). The other functions will also try and grow buffers in future, but this will be a more efficient way when you know block sizes.

int tty_insert_flip_char(tty, ch, flag)

As before insert a character if there is room. Now returns 1 for success, 0 for failure.

int tty_insert_flip_string(tty, str, len)

Insert a block of non error characters. Returns the number inserted.

int tty_prepare_flip_string(tty, strptr, len)

Adjust the buffer to allow len characters to be added. Returns a buffer pointer in strptr and the length available. This allows for hardware that needs to use functions like insl or memcpy_fromio.

Signed-off-by: Alan Cox [email protected] Cc: Paul Fulghum [email protected] Signed-off-by: Hirokazu Takata [email protected] Signed-off-by: Serge Hallyn [email protected] Signed-off-by: Jeff Dike [email protected] Signed-off-by: John Hawkes [email protected] Signed-off-by: Martin Schwidefsky [email protected] Signed-off-by: Adrian Bunk [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 07:59:00 by TwinkleInstituteAB

Create Medical Education from Russian MBBS colleges 2021-22 Twinkle InstituteAB

Russia is a fine destination for medical aspirants who want to pursue an MBBS. Its standard of living and quality of life attract students from all over the world. The MBBS course in Russia runs five years and eight months, and it is considered one of the best options for medical students who want to study medicine abroad. The main reason it is considered among the best options is the advantage of getting a high-quality education at a very affordable price. The majority of Russian medical colleges are recognized by the MCI, WHO, UNESCO, and others. Students are keen to take an MBBS seat in Russia because of the services provided, and the students who get MBBS admission in Russia do commendably. Doing MBBS in Russia is a great opportunity for Indian medical students. Russian medical universities rank among the top thirty medical schools in the world.

It is nearly a dream for undergraduates to take their seats in top medical universities in Russia and become specialists within 5.8 years of their MBBS study. Students from all over the world come to Russia for MBBS abroad, and graduates from Russia practice in clinical fields and hospitals everywhere across the world. These are the particular reasons that attract students to a country like Russia. The services provided to Indian students and the medical training benefit the students and help them make their careers fruitful. Russia is a famous destination for medical students who want to study MBBS abroad. Apart from Russia, other countries such as China, Nepal, Germany, the Philippines, Ukraine, Bangladesh, and Kyrgyzstan also offer MBBS study, and they too provide affordable MBBS courses for medical students. Even though the MBBS study lasts 5.8 years, international students entering Russia are not required to complete a one-year preparatory course to get admission.

Pursuing medicine in Russia is straightforward because students are not required to clear any entrance examination. Also, the Russian government provides subsidies for education, which keeps the fees of medical universities relatively low. Students can get medical insurance and medical treatment whenever they are in need. MBBS is taught in both English and Russian, and the universities also teach the Russian language alongside the medical course so that it is easy to communicate with local patients and the people living there. Russia has advanced teaching techniques in its medical universities, with well-equipped, efficient methods for developing medical students both in theory and in practice.

When we compare the climate of Russia with India's, it is quite different. The country is cold for nearly six months of the year, which residents of Russia find comfortable, and conditions vary widely by geographical region. The average winter temperature stays around -20 degrees Celsius, and during autumn and summer temperatures rise to a maximum of about twenty-five degrees Celsius. Every house in Russia is fitted with a heating facility because of the chilly weather, which makes it comfortable for people to stay inside during that season.

Russian medical universities are counted among the top medical universities in the world. They have leading hospitals and globally recognized degrees, well-stocked and secure accommodation, Indian food messes, advanced teaching, affordable learning, high-quality education, and more. Russia has some of the most innovative medical universities, and most of the medical universities in Russia offer a subsidized fee structure. Finding the right clinical specialization is the most prominent concern across a great number of medical organizations.

Indian medical students' interest in studying MBBS abroad is at an all-time high. Today the world has shifted toward high study parameters: choosing the right school for higher studies that yields maximum return on investment (ROI), global recognition, plenty of opportunities, and above all an all-inclusive course of study. The highlight is the low cost of living; together with the affordable fees, this makes Russia the most preferred destination for MBBS study among Indian medical students. Students can take scholarships as well. The typical fee for MBBS in Russia is between 2 and 4.5 lakhs per year. Another benefit for medical students is that after finishing their study they can practice medicine anywhere in the world.

Russia provides top-notch medical education and practical knowledge to students, which is very helpful for today's generation. Also, students get good accommodation with fresh, quality food on the university campus itself. Every year, Russia welcomes over two lakh foreign medical students from across the world to study in the top medical schools in Russia.


Thursday 2021-05-06 08:10:38 by Al Viro

[PATCH] Fix ext2 readdir f_pos re-validation logic

This fixes not one, but two, silly (but admittedly hard to hit) bugs in the ext2 filesystem "readdir()" function. It also cleans up the code to avoid the unnecessary goto mess.

The bugs were related to re-validating the f_pos value after somebody had either done an "lseek()" on the directory to an invalid offset, or when the offset had become invalid due to a file being unlinked in the directory. The code would not only set the f_version too eagerly, it would also not update f_pos appropriately when the offset fixup took place.

When that happened, we'd occasionally subsequently fail the readdir() even when we shouldn't (no real harm done, but an ugly printk, and obviously you would end up not necessarily seeing all entries).

Thanks to Masoud Sharbiani [email protected] who noticed the problem and had a test-case for it, and also fixed up a thinko in the first version of this patch.

Signed-off-by: Al Viro [email protected] Acked-by: Masoud Sharbiani [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 08:17:13 by Ingo Molnar

[PATCH] lightweight robust futexes: arch defaults

This patchset provides a new (written from scratch) implementation of robust futexes, called "lightweight robust futexes". We believe this new implementation is faster and simpler than the vma-based robust futex solutions presented before, and we'd like this patchset to be adopted in the upstream kernel. This is version 1 of the patchset.

Background

What are robust futexes? To answer that, we first need to understand what futexes are: normal futexes are special types of locks that in the noncontended case can be acquired/released from userspace without having to enter the kernel.

A futex is in essence a user-space address, e.g. a 32-bit lock variable field. If userspace notices contention (the lock is already owned and someone else wants to grab it too) then the lock is marked with a value that says "there's a waiter pending", and the sys_futex(FUTEX_WAIT) syscall is used to wait for the other guy to release it. The kernel creates a 'futex queue' internally, so that it can later on match up the waiter with the waker - without them having to know about each other. When the owner thread releases the futex, it notices (via the variable value) that there were waiter(s) pending, and does the sys_futex(FUTEX_WAKE) syscall to wake them up. Once all waiters have taken and released the lock, the futex is again back to 'uncontended' state, and there's no in-kernel state associated with it. The kernel completely forgets that there ever was a futex at that address. This method makes futexes very lightweight and scalable.

"Robustness" is about dealing with crashes while holding a lock: if a process exits prematurely while holding a pthread_mutex_t lock that is also shared with some other process (e.g. yum segfaults while holding a pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need to be notified that the last owner of the lock exited in some irregular way.

To solve such types of problems, "robust mutex" userspace APIs were created: pthread_mutex_lock() returns an error value if the owner exits prematurely - and the new owner can decide whether the data protected by the lock can be recovered safely.

There is a big conceptual problem with futex based mutexes though: it is the kernel that destroys the owner task (e.g. due to a SEGFAULT), but the kernel cannot help with the cleanup: if there is no 'futex queue' (and in most cases there is none, futexes being fast lightweight locks) then the kernel has no information to clean up after the held lock! Userspace has no chance to clean up after the lock either - userspace is the one that crashes, so it has no opportunity to clean up. Catch-22.

In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot is needed to release that futex based lock. This is one of the leading bugreports against yum.

To solve this problem, 'Robust Futex' patches were created and presented on lkml: the one written by Todd Kneisel and David Singleton is the most advanced at the moment. These patches all tried to extend the futex abstraction by registering futex-based locks in the kernel - and thus give the kernel a chance to clean up.

E.g. in David Singleton's robust-futex-6.patch, there are 3 new syscall variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and FUTEX_RECOVER. The kernel attaches such robust futexes to vmas (via vma->vm_file->f_mapping->robust_head), and at do_exit() time, all vmas are searched to see whether they have a robust_head set.

Lots of work went into the vma-based robust-futex patch, and recently it has improved significantly, but unfortunately it still has two fundamental problems left:

  • they have quite complex locking and race scenarios. The vma-based patches had been pending for years, but they are still not completely reliable.

  • they have to scan every vma at sys_exit() time, per thread!

The second disadvantage is a real killer: pthread_exit() takes around 1 microsecond on Linux, but with thousands (or tens of thousands) of vmas every pthread_exit() takes a millisecond or more, also totally destroying the CPU's L1 and L2 caches!

This is very much noticeable even for normal process sys_exit_group() calls: the kernel has to do the vma scanning unconditionally! (this is because the kernel has no knowledge about how many robust futexes there are to be cleaned up, because a robust futex might have been registered in another task, and the futex variable might have been simply mmap()-ed into this process's address space).

This huge overhead forced the creation of CONFIG_FUTEX_ROBUST, but worse than that: the overhead makes robust futexes impractical for any type of generic Linux distribution.

So it became clear to us that something had to be done. Last week, when Thomas Gleixner tried to fix up the vma-based robust futex patch in the -rt tree, he found a handful of new races and we were talking about it and analyzing the situation. At that point a fundamentally different solution occurred to me. This patchset (written in the past couple of days) implements that new solution. Be warned though - the patchset does things we normally don't do in Linux, so some might find the approach disturbing. Parental advice recommended ;-)

New approach to robust futexes

At the heart of this new approach there is a per-thread private list of robust locks that userspace is holding (maintained by glibc) - which userspace list is registered with the kernel via a new syscall [this registration happens at most once per thread lifetime]. At do_exit() time, the kernel checks this user-space list: are there any robust futex locks to be cleaned up?

In the common case, at do_exit() time, there is no list registered, so the cost of robust futexes is just a simple current->robust_list != NULL comparison. If the thread has registered a list, then normally the list is empty. If the thread/process crashed or terminated in some incorrect way then the list might be non-empty: in this case the kernel carefully walks the list [not trusting it], and marks all locks that are owned by this thread with the FUTEX_OWNER_DEAD bit, and wakes up one waiter (if any).

The list is guaranteed to be private and per-thread, so it's lockless. There is one race possible though: since adding to and removing from the list is done after the futex is acquired by glibc, there is a few instructions window for the thread (or process) to die there, leaving the futex hung. To protect against this possibility, userspace (glibc) also maintains a simple per-thread 'list_op_pending' field, to allow the kernel to clean up if the thread dies after acquiring the lock, but just before it could have added itself to the list. Glibc sets this list_op_pending field before it tries to acquire the futex, and clears it after the list-add (or list-remove) has finished.

That's all that is needed - all the rest of robust-futex cleanup is done in userspace [just like with the previous patches].

Ulrich Drepper has implemented the necessary glibc support for this new mechanism, which fully enables robust mutexes. (Ulrich plans to commit these changes to glibc-HEAD later today.)

Key differences of this userspace-list based approach, compared to the vma based method:

  • it's much, much faster: at thread exit time, there's no need to loop over every vma (!), which the VM-based method has to do. Only a very simple 'is the list empty' op is done.

  • no VM changes are needed - 'struct address_space' is left alone.

  • no registration of individual locks is needed: robust mutexes don't need any extra per-lock syscalls. Robust mutexes thus become a very lightweight primitive - so they don't force the application designer into a hard choice between performance and robustness - robust mutexes are just as fast.

  • no per-lock kernel allocation happens.

  • no resource limits are needed.

  • no kernel-space recovery call (FUTEX_RECOVER) is needed.

  • the implementation and the locking is "obvious", and there are no interactions with the VM.

Performance

I have benchmarked the time needed for the kernel to process a list of 1 million (!) held locks, using the new method [on a 2GHz CPU]:

  • with FUTEX_WAIT set [contended mutex]: 130 msecs
  • without FUTEX_WAIT set [uncontended mutex]: 30 msecs

I have also measured an approach where glibc does the lock notification [which it currently does for !pshared robust mutexes], and that took 256 msecs - clearly slower, due to the 1 million FUTEX_WAKE syscalls userspace had to do.

(1 million held locks are unheard of - we expect at most a handful of locks to be held at a time. Nevertheless it's nice to know that this approach scales nicely.)

Implementation details

The patch adds two new syscalls: one to register the userspace list, and one to query the registered list pointer:

asmlinkage long sys_set_robust_list(struct robust_list_head __user *head, size_t len);

asmlinkage long sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr, size_t __user *len_ptr);

List registration is very fast: the pointer is simply stored in current->robust_list. [Note that in the future, if robust futexes become widespread, we could extend sys_clone() to register a robust-list head for new threads, without the need of another syscall.]

So there is virtually zero overhead for tasks not using robust futexes, and even for robust futex users there is only one extra syscall per thread lifetime, and the cleanup operation, if it happens, is fast and straightforward. The kernel doesn't have any internal distinction between robust and normal futexes.

If a futex is found to be held at exit time, the kernel sets the highest bit of the futex word:

#define FUTEX_OWNER_DIED        0x40000000

and wakes up the next futex waiter (if any). User-space does the rest of the cleanup.

Otherwise, robust futexes are acquired by glibc by putting the TID into the futex field atomically. Waiters set the FUTEX_WAITERS bit:

#define FUTEX_WAITERS           0x80000000

and the remaining bits are for the TID.

Testing, architecture support

I've tested the new syscalls on x86 and x86_64, and have made sure the parsing of the userspace list is robust [ ;-) ] even if the list is deliberately corrupted.

i386 and x86_64 syscalls are wired up at the moment, and Ulrich has tested the new glibc code (on x86_64 and i386), and it works for his robust-mutex testcases.

All other architectures should build just fine too - but they won't have the new syscalls yet.

Architectures need to implement the new futex_atomic_cmpxchg_inuser() inline function before wiring up the syscalls (that function returns -ENOSYS right now).

This patch:

Add placeholder futex_atomic_cmpxchg_inuser() implementations to every architecture that supports futexes. It returns -ENOSYS.

Signed-off-by: Ingo Molnar [email protected] Signed-off-by: Thomas Gleixner [email protected] Signed-off-by: Arjan van de Ven [email protected] Acked-by: Ulrich Drepper [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 08:31:17 by Eric W. Biederman

[PATCH] proc: Use sane permission checks on the /proc//fd/ symlinks

Since 2.2 we have been doing a chroot check to see if it is appropriate to return a read or follow one of these magic symlinks. The chroot check was asking a question about the visibility of files to the calling process and it was actually checking the destination process, and not the files themselves. That test was clearly bogus.

In my first pass through I simply fixed the test to check the visibility of the files themselves. That naive approach to fixing the permissions was too strict and resulted in cases where a task could not even see all of its file descriptors.

What has disturbed me about relaxing this check is that file descriptors are per-process private things, and they are occasionally used as user-space capability tokens. Looking a little farther into the symlink path on /proc I did find userid checks and a capability check (CAP_DAC_OVERRIDE), so there were permission checks covering this.

But I was still concerned about privacy. Besides /proc there is only one other way to find out this kind of information, and that is ptrace. ptrace has been around for a long time and it has a well established security model.

So after thinking about it I finally realized that the permission checks that make sense are the permission checks applied to ptrace_attach. The checks are simple and per-process, and won't cause nasty surprises for people coming from less capable unices.

Unfortunately there is one case that the current ptrace_attach test does not cover: Zombies and kernel threads. Single stepping those kinds of processes is impossible. Being able to see which file descriptors are open on these tasks is important to lsof, fuser and friends. So for these special processes I made the rule that you can't find out unless you have CAP_SYS_PTRACE.

These proc permission checks should now conform to the principle of least surprise. As well as using much less code to implement :)

Signed-off-by: Eric W. Biederman [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 08:33:26 by Ingo Molnar

[PATCH] pi-futex: futex code cleanups

We are pleased to announce "lightweight userspace priority inheritance" (PI) support for futexes. The following patchset and glibc patch implements it, ontop of the robust-futexes patchset which is included in 2.6.16-mm1.

We are calling it lightweight for 3 reasons:

  • in the user-space fastpath a PI-enabled futex involves no kernel work (or any other PI complexity) at all. No registration, no extra kernel calls - just pure fast atomic ops in userspace.

  • in the slowpath (in the lock-contention case), the system call and scheduling pattern is in fact better than that of normal futexes, due to the 'integrated' nature of FUTEX_LOCK_PI. [more about that further down]

  • the in-kernel PI implementation is streamlined around the mutex abstraction, with strict rules that keep the implementation relatively simple: only a single owner may own a lock (i.e. no read-write lock support), only the owner may unlock a lock, no recursive locking, etc.

Priority Inheritance - why, oh why???

Many of you heard the horror stories about the evil PI code circling Linux for years, which makes no real sense at all and is only used by buggy applications and which has horrible overhead. Some of you have dreaded this very moment, when someone actually submits working PI code ;-)

So why would we like to see PI support for futexes?

We'd like to see it done purely for technological reasons. We don't think it's a buggy concept, we think it's useful functionality to offer to applications, which functionality cannot be achieved in other ways. We also think it's the right thing to do, and we think we've got the right arguments and the right numbers to prove that. We also believe that we can address all the counter-arguments as well. For these reasons (and the reasons outlined below) we are submitting this patch-set for upstream kernel inclusion.

What are the benefits of PI?

The short reply:

User-space PI helps achieving/improving determinism for user-space applications. In the best-case, it can help achieve determinism and well-bound latencies. Even in the worst-case, PI will improve the statistical distribution of locking related application delays.

The longer reply:

Firstly, sharing locks between multiple tasks is a common programming technique that often cannot be replaced with lockless algorithms. As we can see it in the kernel [which is a quite complex program in itself], lockless structures are rather the exception than the norm - the current ratio of lockless vs. locky code for shared data structures is somewhere between 1:10 and 1:100. Lockless is hard, and the complexity of lockless algorithms often endangers the ability to do robust reviews of said code. I.e. critical RT apps often choose lock structures to protect critical data structures, instead of lockless algorithms. Furthermore, there are cases (like shared hardware, or other resource limits) where lockless access is mathematically impossible.

Media players (such as Jack) are an example of reasonable application design with multiple tasks (with multiple priority levels) sharing short-held locks: for example, a highprio audio playback thread is combined with medium-prio construct-audio-data threads and low-prio display-colory-stuff threads. Add video and decoding to the mix and we've got even more priority levels.

So once we accept that synchronization objects (locks) are an unavoidable fact of life, and once we accept that multi-task userspace apps have a very fair expectation of being able to use locks, we've got to think about how to offer the option of a deterministic locking implementation to user-space.

Most of the technical counter-arguments against doing priority inheritance only apply to kernel-space locks. But user-space locks are different, there we cannot disable interrupts or make the task non-preemptible in a critical section, so the 'use spinlocks' argument does not apply (user-space spinlocks have the same priority inversion problems as other user-space locking constructs). Fact is, pretty much the only technique that currently enables good determinism for userspace locks (such as futex-based pthread mutexes) is priority inheritance:

Currently (without PI), if a high-prio and a low-prio task share a lock [this is a quite common scenario for most non-trivial RT applications], even if all critical sections are coded carefully to be deterministic (i.e. all critical sections are short in duration and only execute a limited number of instructions), the kernel cannot guarantee any deterministic execution of the high-prio task: any medium-priority task could preempt the low-prio task while it holds the shared lock and executes the critical section, and could delay it indefinitely.

Implementation:

As mentioned before, the userspace fastpath of PI-enabled pthread mutexes involves no kernel work at all - they behave quite similarly to normal futex-based locks: a 0 value means unlocked, and a value==TID means locked. (This is the same method as used by list-based robust futexes.) Userspace uses atomic ops to lock/unlock these mutexes without entering the kernel.

To handle the slowpath, we have added two new futex ops:

FUTEX_LOCK_PI
FUTEX_UNLOCK_PI

If the lock-acquire fastpath fails, [i.e. an atomic transition from 0 to TID fails], then FUTEX_LOCK_PI is called. The kernel does all the remaining work: if there is no futex-queue attached to the futex address yet then the code looks up the task that owns the futex [it has put its own TID into the futex value], and attaches a 'PI state' structure to the futex-queue. The pi_state includes an rt-mutex, which is a PI-aware, kernel-based synchronization object. The 'other' task is made the owner of the rt-mutex, and the FUTEX_WAITERS bit is atomically set in the futex value. Then this task tries to lock the rt-mutex, on which it blocks. Once it returns, it has the mutex acquired, and it sets the futex value to its own TID and returns. Userspace has no other work to perform - it now owns the lock, and futex value contains FUTEX_WAITERS|TID.

If the unlock side fastpath succeeds, [i.e. userspace manages to do a TID -> 0 atomic transition of the futex value], then no kernel work is triggered.

If the unlock fastpath fails (because the FUTEX_WAITERS bit is set), then FUTEX_UNLOCK_PI is called, and the kernel unlocks the futex on the behalf of userspace - and it also unlocks the attached pi_state->rt_mutex and thus wakes up any potential waiters.

Note that under this approach, contrary to other PI-futex approaches, there is no prior 'registration' of a PI-futex. [which is not quite possible anyway, due to existing ABI properties of pthread mutexes.]

Also, under this scheme, 'robustness' and 'PI' are two orthogonal properties of futexes, and all four combinations are possible: futex, robust-futex, PI-futex, robust+PI-futex.

glibc support:

Ulrich Drepper and Jakub Jelinek have written glibc support for PI-futexes (and robust futexes), enabling robust and PI (PTHREAD_PRIO_INHERIT) POSIX mutexes. (PTHREAD_PRIO_PROTECT support will be added later on too, no additional kernel changes are needed for that). [NOTE: The glibc patch is obviously unofficial and unsupported without matching upstream kernel functionality.]

the patch-queue and the glibc patch can also be downloaded from:

http://redhat.com/~mingo/PI-futex-patches/

Many thanks go to the people who helped us create this kernel feature: Steven Rostedt, Esben Nielsen, Benedikt Spranger, Daniel Walker, John Cooper, Arjan van de Ven, Oleg Nesterov and others. Credits for related prior projects goes to Dirk Grambow, Inaky Perez-Gonzalez, Bill Huey and many others.

Clean up the futex code, before adding more features to it:

  • use u32 as the futex field type - that's the ABI
  • use __user and pointers to u32 instead of unsigned long
  • code style / comment style cleanups
  • rename hash-bucket name from 'bh' to 'hb'.

I checked the pre and post futex.o object files to make sure this patch has no code effects.

Signed-off-by: Ingo Molnar [email protected] Signed-off-by: Thomas Gleixner [email protected] Signed-off-by: Arjan van de Ven [email protected] Cc: Ulrich Drepper [email protected] Cc: Jakub Jelinek [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Thursday 2021-05-06 09:09:10 by WatIsDeze

Commit this god awful stuff I have had to put up with. It seems to work, so let's not touch that shit anymore lmao :D


Thursday 2021-05-06 09:43:22 by Marko Grdinić

"9:05am. I am up. Right now, it is not the tedium blocking me. I've made progress on that front.

What is blocking me is simply fear. Pure, irrational fear.

I gave it my all to come up with that optimization scheme. Rescaling the gradients, rebalancing them, and also the replay buffer trickery. There is no way to improve backprop past this point.

Usually when I feel this way I'd seek out more information, but I know that there is nothing out there that can help me anymore.

9:10am. I have to go forward and take fate into my own hands. The shame of not having unlimited inspiration is something I will make up for once I enter the self improvement loop.

There are just too many things I cannot imitate with my programming talent alone at this juncture.

9:15am. The answer exists for quite a lot of things. For long term memory, I said that it needs to be figured out, but the TD function has an answer for how to transfer value across arbitrary time. So it is not like there isn't an answer for that either. Ilya did point out that weights can serve as long term memory.

Maybe I was too rough yesterday in saying that the interview is worthless. I did learn just a bit from all of them.

But I am not going to find an answer to my fear in them.

9:30am. I am definitely still missing skill. The pieces needed to handle the nets in a truly modular fashion aren't here yet. I cannot become an expert with just this. But that is fine.

The work I will do in the present will compose with what will come in the future.

9:35am. Let me do my chilling and then I will start. The programming I will be doing here is purely for overcoming my fright.

I might have regrets, but all the mistakes I did were due to acting according to some emotion. I can put on the brakes and I can accelerate, but I cannot change what I feel. If I could, I would no longer be human. So I won't pretend that I am something greater. I won't chastise myself over my weakness, as that will result in nothing.

All I have to do is get closer to my goal. That is it.

10:05am. Done with the break. Today and from here on out, even if I have to write 10 lines a day, I will try to move forward.

There is nothing wrong with my knowledge or my programming skill.

It is my spirit that is weak here. It is not that I am too lazy to work. It is not that I cannot overcome the tedious parts. It is that I am intimidated by the grandness of the task.

10:10am. I need to overcome it. If I can do this, I could attain for the first time in my life actual power. Am I really going to stumble right before the finish line?

My biggest enemy is myself here.

Forget the pride and the path traveled to get here. I did not have to make a language. I did not have to do all of this.

I could have thought of this scheme and done it in Python. Forget the extraneous thoughts.

Right now, I am once again a bare beginner. Throw away the current power. Go back to what I were.

If I cannot write 100 lines, write 50. If I cannot do 50, do 20. Do as much as you can even when you do not feel like it.

Eventually the ice will break and the momentum will start to build.

A true expert could do the challenge of implementing the ideas in the review in a week or two. So what if it takes me 4 times as long. I should be struggling here.

10:30am.

inl nodes_2p forall game_state o a. is_choice (player_funs fp1, player_funs fp2)
        : game2p game_state o a (pl2 o a -> r2) = game2p {
    terminal = fun (s1,s2) r (chance_prob,p1,p2) =>
        fp1.terminal {chance_prob game_state=s1; id=0; player=p1; reward=r} . fp2.terminal {chance_prob game_state=s2; id=1; player=p2; reward=r}
        r
    action = fun s pid ar f (chance_prob,p1,p2) =>
        let f (a : a) (b : pl2 o a) : r2 = real real_core.unbox a (fun _ => f a b)
        if pid = 0 then fp1.action {chance_prob game_state=s; id=0; player=p1; player'=prob p2; actions=ar; next=fun (_,a as cs) => f a (chance_prob,apply_changes p1 cs,apply_action p2 a)}
        else fp2.action {chance_prob game_state=s; player=p2; id=1; player'=prob p1; actions=ar; next=fun (_,a as cs) => f a (chance_prob,apply_action p1 a,apply_changes p2 cs)}
    draw = (if is_choice then choice else iter) draw
    sample = (if is_choice then choice else iter) sample
    }

What do I do about this? There is just too much stuff here.

Ok, the main idea that springs to mind is that I should ignore the parallel processing considerations. How should only a single thread of this work? What should I focus on?

union rec compiled_node obs act =
    | Reward: r2
    | Action: obs * a u64 act * (act -> compiled_node obs act)

Would this be right?

For the time being I am going to forget what obs should be here. In other places it means a list obs act, but here it can be more generic.

10:50am.

union observation o a = Observation: o | Action: a
nominal player o a = { prob : log_prob; observations : list (observation o a) }
inl init forall observation action. : player observation action = player {prob=Log_prob_one; observations=Nil} |> dyn
nominal player_funs game_state obs act r = {
    action : {game_state : game_state; id : u8; player : player obs act;  player' : log_prob; chance_prob : log_prob; actions : a u64 act; next : log_prob * act -> r} -> r
    terminal : {game_state : game_state; id : u8; player : player obs act; chance_prob : log_prob; reward : r2} -> ()
    }
type pl2 o a = log_prob * player o a * player o a
type pl2_compiled = {chance : log_prob; p1 : log_prob; p2 : log_prob}

Let me add pl2_compiled to the mix.

10:55am. Hahhh...

type pl2_compiled = {chance : log_prob; p1 : log_prob; p2 : log_prob}

inl prob (player {prob}) = prob
inl observations (player {observations}) = observations
inl apply_action (player x) a = player {x with observations#=(::) (Action: a)}
inl apply_observation (player x) o = player {x with observations#=(::) (Observation: o)}
inl apply_changes (player x) (prob,a) = player {x with observations#=(::) (Action: a); prob#=(+@) prob}

inl sample_players_update pid (prob,x) (chance_prob,p1,p2) =
    prob +@ chance_prob,
    match pid with
    | Some: pid =>
        inl update pid' p = if pid = pid' then apply_observation p x else p
        update 0 p1, update 1 p2
    | None =>
        inl update p = apply_observation p x
        update p1, update p2

// Indexes randomly into a uniform categorical distribution, weighting the choice by its probability.
inl choice one pid dist = one true (sampling.randomInLength dist) pid dist

I don't know. The original version makes a lot of sense, especially for tabular players. But my addons do not really fit into the framework.

11:05am. I said I would not redesign all of this, but should I just wipe and start anew? I am not really inspired right now, and the old material is hindering me more than it is helping me. I need to redo my reasoning.

But I do feel it is a major step forward to ignore the parallel considerations. This will simplify my thinking a lot.

I haven't really thought about how I am going to do this first step in the past couple of days. Instead I've more been working through the mental obstacles preventing me from starting.

I know that at its root, I need to rewrite all of this so it is interpretable.

...Let me take a short break here.

11:40am. My focus really is not on programming once again. Instead I am again fighting my demons. Let me get breakfast. If necessary I will step away from the screen.

I need to beat my past regrets and overcome my fear. Whatever sparse inspiration bubbles to the surface gets taken over by the thoughts of them. I should gather the resolve and redo the whole project from scratch in line with what I will need for a tree version."


Thursday 2021-05-06 10:22:09 by Turlough Mullan

Signals work even with cat (no args). Tried some stuff in export but holy fuck it's hateful lol


Thursday 2021-05-06 11:45:30 by Ryan Lucas

Fixed failing test

I think this is a significant amount of changes for a PR now.

  • Bug fix: The failing test was relatively simple to fix. I added code in Client during the generate_request() call to read everything left from the client until content-length in the parser was 0 - this should ensure that the entire body of a client's message is always read, even when the body doesn't end in a CRLF. Any information after a CRLF is ignored.
  • Bug fix: While fixing the aforementioned bug, I happened to run the tests through valgrind, and was met with an insane amount of error messages concerning "Uninitialized values". Long story short, after a couple of painful hours, I learned that capturing a local char* by reference in a C++ lambda leaves the lambda reading through a dangling reference once the enclosing scope exits. Errors vanished after capturing by value.
  • Bug fix: I found valgrind errors in the Socket connection test. Discovered it was due to an uninitialized socklen_t. Fixed now by initializing to 0.

Next goals remain the same as before. Next PR will be to do with a Response class, and there should be testing for this too. As there is no parsing required for this class however, it should be simpler than the Request class.


Thursday 2021-05-06 13:36:53 by Muhammad Ahmad Selim

Pima Indian Diabetes

Context This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.

Content The datasets consists of several medical predictor variables and one target variable, Outcome. Predictor variables includes the number of pregnancies the patient has had, their BMI, insulin level, age, and so on.

Acknowledgements Smith, J.W., Everhart, J.E., Dickson, W.C., Knowler, W.C., & Johannes, R.S. (1988). Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Symposium on Computer Applications and Medical Care (pp. 261--265). IEEE Computer Society Press.

Inspiration Can you build a machine learning model to accurately predict whether or not the patients in the dataset have diabetes or not?


Thursday 2021-05-06 14:23:58 by Perry Greenwood

Andromeda and Perry

Both of these people are mad scientists and geniuses. Unfortunately, they have a less than average sister named Aurora who writes mean things about herself because her younger brother thinks it's soooooooooooooooooooo funny.


Thursday 2021-05-06 17:40:57 by Marko Grdinić

"12:30pm. Let me go to bed here. I am not ready to start after all. But I'll leave out fiction and frivolous pursuits in return. Until I am ready, I'll stay away from the screen.

7:10pm. Up to a bit over 6pm, I spent all the time since the last entry in bed. The fear is giving way to great sadness, which has abated by half, but I am still not ready to start. I think that one more day of this should be enough to give me the kick that I need.

I did spend some time thinking about programming as well.

And it is harsh. It is really difficult to adapt enumerative methods to the GPU. It might be possible in theory, but the reasoning gets hard really quickly. I do not want to go down this path. Instead, the outcome sampling methods are the ones that are easily tractable in parallel. I can think of a few different variants. I am not sure which I will end up with in the end, but I'll start off with the straightforward one, with no TD learning or cutoffs.

It is a lot easier to start off with a fixed number of particles and filter them as things go along.

With enumerative methods, I have no idea how I would deal with chance nodes.

I really did have the intention of doing enumerative CFR at least for Leduc even on the GPU, but I do not want to bother with it.

It is true that the first time I implemented sampling CFR I had difficulty getting it to work, but that was because of all the mistakes that I won't repeat. It really isn't that much more difficult than the vanilla.

7:30pm. I have the methods that I need, but if only I can get my mood to improve I should be able to start. The shock treatment away from the screen should be the most effective. Eventually I'll get tired of moping around.

To start things off, I am first going to implement sampling CFR's parallel version in the tabular regime. After that I'll focus on dealing with NNs.

7:35pm. Sigh, let me get back to RI. Once I catch up, I will finally be free.

7:40pm. I'll go to bed early today as well, so let me chill a bit here."


Thursday 2021-05-06 18:44:53 by Andrés J. Díaz

feat: add abbreviations

Abbreviations are a way to replace some text in a post or reply with a conventional expansion. For example, replacing IMHO with in my humble opinion.

This commit also adds emoji support as abbreviations. For instance, use :love: to put a lovely heart in your message. You will need to enable it by setting input.emojis to true in the config.


Thursday 2021-05-06 18:45:49 by mkietzm4n

I hate closure so, so much. I can't even get this pos project to build. Yes, ok I'm missing a jarfile, let me get that so I can fucking build. Oh wait but literally every way I try to get it fails miserably and no one on github can help me. Closure, soy, gulp, all of it, it's all a piece of shit web framework that I am 100000% certain will fade into oblivion. Good riddance.


Thursday 2021-05-06 19:23:06 by DarthVader125

<script type="text/javascript" id="akamaiRootBlock"> window['akamaiRoot'] = '//www.salesforce.com'; </script> <title>Chatter - The Enterprise Social Network & Collaboration Software - Salesforce.com</title> <script type="text/javascript" src="//www.salesforce.com/etc/clientlibs/sfdc-aem-master/sfdc_jquery.min.d6ea05d15a13f90cbddc2a00c4eb5f05.js"></script> <script src="https://a.sfdcstatic.com/enterprise/salesforce/prod/6140/oneTrust/scripttemplates/otSDKStub.js" type="text/javascript" charset="UTF-8" data-domain-script="742a15b9-6aa4-4c2f-99c1-ad4ca220cf96"></script> <script> function OptanonWrapper() { function getCookie(name) { var value = "; " + document.cookie; var parts = value.split("; " + name + "="); if (parts.length == 2) { return parts.pop().split(";").shift(); } } function removeElement(element) { if (!getCookie('OptanonAlertBoxClosed') && element) { element.style.display = "none"; } } var footerLinkToggle = document.querySelector(".page-footer_link .optanon-toggle-display"); if (footerLinkToggle) { footerLinkToggle.addEventListener("click", Optanon.ToggleInfoDisplay, false); footerLinkToggle.addEventListener("keydown", function(e){ if (e.keyCode === 13) { Optanon.ToggleInfoDisplay() } }, false); } //Check if user's cookies are enabled, if not remove One Trust from page var cookies = ("cookie" in document && (document.cookie.length > 0 || (document.cookie = "test").indexOf.call(document.cookie, "test") > -1)); if (!cookies) { var box = document.querySelector('#onetrust-consent-sdk'); box.remove(); return; } try { //Check if current page is Privacy page, if so do not display One Trust modal if(digitalData) { if(digitalData.page.pagename.indexOf(":company:privacy") > -1){ var el = document.querySelector("#onetrust-consent-sdk"); removeElement(el); } } if (SfdcWwwBase.gdpr) { SfdcWwwBase.gdpr.init(); } }catch(err){ console.error(err.message) } } </script> <script type="text/javascript" 
src="//www.salesforce.com/etc/clientlibs/sfdc-aem-master/clientlibs_onetrust.min.f035bbb84c5eb610b5cae93bef0d6014.js"></script> <script type="text/javascript" src="//www.salesforce.com/etc/clientlibs/granite/lodash/modern.min.3a0ad4c7614495b1cae264dfcb9b9813.js"></script> <script type="text/javascript" src="//www.salesforce.com/etc/clientlibs/sfdc-aem-master/clientlibs_analytics_top.min.8a963051768f1ee0be822df84a226fe2.js"></script> <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start': new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0], j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src= 'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f); })(window,document,'script','dataLayer', (function(){ var gtmContainerID = "GTM-WRXS6TH"; var searchString = window.location.search || ""; if (searchString.indexOf("gtmTest=") > -1) { if (searchString.indexOf("gtmTest=baseline") > -1) { gtmContainerID = "GTM-NRZ2K87"; } else if (searchString.indexOf("gtmTest=test") > -1) { gtmContainerID = "GTM-5P8WRDB"; } } return gtmContainerID; })());</script> <script type="text/javascript"> var SfdcWwwBase = SfdcWwwBase || {}; SfdcWwwBase.linkedDataParameters = { organizationSchema : "[\n{ \x22@context\x22:\x22https:\/\/schema.org\x22,\n \x22@type\x22:\x22Organization\x22,\n \x22@id\x22:\x22https:\/\/www.salesforce.com\/#organization\x22,\n \x22url\x22:\x22https:\/\/www.salesforce.com\/\x22,\n \x22name\x22:\x22Salesforce.com\x22,\n \x22sameAs\x22: [\n \x22https:\/\/www.wikidata.org\/wiki\/Q941127\x22,\n \x22https:\/\/en.wikipedia.org\/wiki\/Salesforce.com\x22,\n \x22https:\/\/www.crunchbase.com\/organization\/salesforce\x22,\n \x22https:\/\/www.instagram.com\/salesforce\/\x22,\n \x22https:\/\/www.facebook.com\/salesforce\x22,\n \x22https:\/\/twitter.com\/salesforce\x22,\n \x22https:\/\/www.linkedin.com\/company\/salesforce\x22,\n \x22https:\/\/www.youtube.com\/Salesforce\x22],\n \x22subOrganization\x22: [\n {\n 