MindMap Gallery JUC
This is a mind map about JUC, including inter-thread communication, Summary of knowledge on concurrent container classes, Callable interfaces, blocking queues, ThreadPool thread pools, etc.
Edited at 2023-12-20 18:03:44
JUC
synchronized
lock object
Ordinary synchronized method
Lock object: the object the method is called on (this)
Static synchronized method
Lock object: the Class object (bytecode object) of the current class
synchronized code block
Lock object: can be the Class (bytecode) object
Lock object: or an instance object
Controlled by JVM
synchronized is also a reentrant lock
When a thread already holds the lock in an outer method, it automatically acquires the same lock when it enters an inner synchronized method. Both ReentrantLock and synchronized in Java are reentrant locks. One advantage of reentrant locks is that they avoid a class of deadlocks to a certain extent.
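The reentrancy described above can be shown in a minimal sketch (class and method names are illustrative, not from the source):

```java
public class ReentrantDemo {
    private int depth = 0;

    // The thread already holds the monitor of `this` when outer() calls
    // inner(); because synchronized is reentrant, inner() does not deadlock.
    public synchronized int outer() {
        depth++;
        return inner();
    }

    public synchronized int inner() {
        depth++;
        return depth;
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints 2
    }
}
```

If synchronized were not reentrant, the call to inner() would block forever waiting for a lock the thread itself holds.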
Lock
ReentrantLock reentrant lock
new ReentrantLock() defaults to an unfair lock
new ReentrantLock(true): passing in true creates a fair lock
Fair lock and unfair lock
Fair lock: The thread at the front of the waiting queue gets the lock first.
Unfair lock: whichever thread wins the race gets the lock, which can cause thread starvation
With an unfair lock, a particular thread may never get to run because other threads win the lock every time.
Wait for a limited time
public boolean tryLock(long timeout, TimeUnit unit)
Tries to acquire the lock within the given time: returns true if acquired, false if the timeout expires (plain lock() instead blocks the thread until the lock is acquired)
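A small sketch of the timed tryLock pattern (the helper method is my own, not from the source):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Tries to acquire the lock for at most `millis` milliseconds.
    // Returns true if the work ran under the lock, false if it gave up.
    static boolean tryWork(ReentrantLock lock, long millis) throws InterruptedException {
        if (lock.tryLock(millis, TimeUnit.MILLISECONDS)) {
            try {
                return true;   // got the lock; real work would go here
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        return false;          // timeout expired without acquiring the lock
    }
}
```

On an uncontended lock the call returns true immediately; under contention it waits at most the timeout and then gives up instead of blocking forever.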
The difference between ReentrantLock and synchronized
Similarities
synchronized and ReentrantLock are both exclusive locks
synchronized and ReentrantLock are both reentrant locks
synchronized and ReentrantLock are both pessimistic locks
Differences
synchronized is locked and unlocked by the JVM and cannot be controlled by the user, while ReentrantLock is locked and unlocked explicitly by the user.
ReentrantLock must be unlocked manually (once per lock call, since it is reentrant), while synchronized releases the lock automatically.
synchronized cannot respond to interruption while waiting for a lock; it waits until the lock is acquired. ReentrantLock can use tryLock to wait with a timeout: if the lock is not acquired within the time limit, it gives up instead of blocking forever.
ReentrantReadWriteLock read-write lock
reentrantReadWriteLock.writeLock() acquires the write lock
reentrantReadWriteLock.readLock() acquires the read lock
A read-write lock allows multiple reading threads to access it at the same time, but when a writing thread accesses it, all reading threads and other writing threads will be blocked.
Writes cannot proceed concurrently with other writes
Reads and writes cannot proceed concurrently
Reads can proceed concurrently
Lock downgrade
Lock downgrade means downgrading from a write lock to a read lock. When the current thread owns the write lock, the process of acquiring the read lock again and subsequently releasing the write lock is lock downgrading.
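The downgrade sequence above can be sketched as follows (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    // Write, then downgrade: acquire the read lock while still holding the
    // write lock, release the write lock, and read under the read lock.
    // No other writer can sneak in between the write and the read.
    public int writeThenRead(int v) {
        rw.writeLock().lock();
        try {
            value = v;
            rw.readLock().lock();    // acquire read lock inside the write lock
        } finally {
            rw.writeLock().unlock(); // downgrade: drop write, keep read
        }
        try {
            return value;            // read under the downgraded read lock
        } finally {
            rw.readLock().unlock();
        }
    }
}
```

Note the reverse direction (read lock to write lock, an upgrade) is not supported by ReentrantReadWriteLock and would deadlock.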
Communication between threads
synchronized
wait
lockObject.wait()
Releases the lock and gives up the CPU; once awakened, the thread re-acquires the lock and resumes at the wait() call.
lockObject.notify()
Randomly wakes one waiting thread; the awakened thread still has to compete for the lock
lockObject.notifyAll()
Wake up all waiting threads
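The wait/notify pairing above, sketched with a boolean flag (class and field names are my own):

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    public void produce() {
        synchronized (lock) {
            ready = true;
            lock.notify();        // wake one thread waiting on this lock object
        }
    }

    public boolean consume() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {      // guard in a loop: re-check after every wakeup
                lock.wait();      // releases the lock and parks until notified
            }
            return ready;
        }
    }
}
```

wait() and notify() must be called while holding the monitor of the same lock object, otherwise IllegalMonitorStateException is thrown.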
ReentrantLock
wait
reentrantLock.newCondition().await()
Multiple lock conditions can be created
Condition aCondition = reentrantLock.newCondition();
Condition bCondition = reentrantLock.newCondition();
Condition cCondition = reentrantLock.newCondition();
wake
condition.signal()
Wakes one thread waiting on the specified Condition object
condition.signalAll()
Wakes all threads waiting on the specified Condition object
More fine-grained than synchronized: you can choose which Condition's waiting threads to wake
Spurious wakeup
The awakened thread's if condition may no longer hold, but because it was woken it resumes after wait() once it grabs the lock, so threads execute in the wrong order.
Solution: replace the if guard with a while loop, so that after being awakened and grabbing the lock the thread re-checks the condition.
A single notify combined with while guards can still produce a "fake deadlock" in which every thread is parked waiting.
Solution: use notifyAll or signalAll to wake all threads; threads whose condition still fails simply go back to waiting.
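The while-guard rule above applies equally to Condition.await(). A minimal single-slot buffer sketch (names are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private String item;             // single-slot buffer

    public void put(String s) {
        lock.lock();
        try {
            item = s;
            notEmpty.signal();       // wake one consumer waiting on this Condition
        } finally {
            lock.unlock();
        }
    }

    public String take() throws InterruptedException {
        lock.lock();
        try {
            while (item == null) {   // while, not if: re-check after every wakeup
                notEmpty.await();
            }
            String s = item;
            item = null;
            return s;
        } finally {
            lock.unlock();
        }
    }
}
```

Because the guard is a while loop, a spurious wakeup or a wakeup whose item was already consumed simply parks the thread again.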
Concurrent container class
Collections tool class
synchronizedList
synchronizedMap
synchronizedCollection
synchronizedSet
synchronizedSortedMap
synchronizedSortedSet
Wrap thread-unsafe containers (List, Set, Map, etc.) into thread-safe ones
CopyOnWrite
Copy-on-write containers
When adding an element, we do not add it to the current container directly. Instead we copy the current container to create a new one, add the element to the new container, and then point the container reference at the new container.
The CopyOnWrite container also embodies the idea of read-write separation: reads and writes operate on different containers.
Concurrently read the CopyOnWrite container without locking, and lock when writing
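The snapshot behavior described above is easy to observe (class and method names are my own):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // Iterates over a snapshot while mutating the list: each add() copies
    // the backing array, so the iterator never sees the new elements and
    // no ConcurrentModificationException is thrown.
    static List<String> snapshotIterate() {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");
        for (String s : list) {
            list.add(s + "!"); // safe: writes go to a fresh copy
        }
        return list;           // [a, b, a!, b!]
    }
}
```

The same loop over an ArrayList would throw ConcurrentModificationException; the trade-off is that every write pays the cost of copying the array.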
Auxiliary class
CountDownLatch (countdown counter)
new CountDownLatch(int count) instantiates a counter with an initial value of count
Each time countDown() is called, the counter decreases by one.
await() waits for execution when the counter decreases to 0
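The three steps above (instantiate with a count, countDown per worker, await until zero) in a short sketch (names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    // The calling thread waits until all `workers` threads have counted down.
    static int runWorkers(int workers) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(workers); // initial count
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                done.incrementAndGet(); // simulate the worker's job
                latch.countDown();      // counter decreases by one
            }).start();
        }
        latch.await();                  // blocks until the counter reaches 0
        return done.get();              // all workers are guaranteed finished
    }
}
```

countDown() followed by await() also establishes a happens-before edge, so the caller sees every worker's writes.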
CyclicBarrier (Cyclic Barrier)
CyclicBarrier(int parties, Runnable barrierAction) creates a CyclicBarrier instance. parties specifies the number of threads participating in waiting for each other. barrierAction is an optional Runnable command that only runs once at each barrier point and can share state before executing subsequent business. This operation is performed by the last thread entering the barrier point
CyclicBarrier(int parties) creates a CyclicBarrier instance, parties specifies the number of threads participating in waiting for each other.
When the await() method is called, it indicates that the current thread has reached the barrier point. The current thread is blocked and enters sleep state. The current thread will not be awakened until all threads reach the barrier point.
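The barrier behavior above, including the once-per-trip barrierAction run by the last arriving thread, can be sketched like this (names are my own):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    // All parties block at await(); the barrierAction runs exactly once,
    // in the last thread to arrive, before anyone is released.
    static int meet(int parties) throws InterruptedException {
        AtomicInteger actionRuns = new AtomicInteger();
        CyclicBarrier barrier = new CyclicBarrier(parties, actionRuns::incrementAndGet);
        Thread[] ts = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            ts[i] = new Thread(() -> {
                try {
                    barrier.await();   // park until all parties have arrived
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return actionRuns.get();       // the action ran exactly once
    }
}
```

Unlike CountDownLatch, the barrier is cyclic: after tripping it resets and the same instance can be reused for the next round.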
Semaphore (semaphore)
Semaphore controls how many threads access a resource at the same time. With N resources, each thread acquires one; once all resources are allocated, subsequent threads block and wait until a thread that holds a resource releases it, and then one of them can continue.
public Semaphore(int permits) //Construction method, permits refers to the number of resources (semaphore)
public void acquire() throws InterruptedException // Occupies resources. When a thread calls the acquire operation, it either successfully acquires the semaphore (the semaphore is decremented by 1), or waits until a thread releases the semaphore or times out.
public void release() // releases a resource: increments the semaphore by 1 and then wakes a waiting thread
Callable interface
use
Create a class that implements the Callable interface and override the call method
Create a FutureTask, call the constructor with parameters, and pass in the callable implementation class object
new FutureTask<String>(callable)
Create a thread class object new Thread(futureTask).start() and call the start method
futureTask.get() gets the thread's return value
Precautions for use
futureTask.get() blocks the current thread, so it is best called last.
The result is computed only once; FutureTask caches and reuses the previously computed result.
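The usage steps and both precautions above in one sketch (class name is illustrative; the Callable is written as a lambda):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    static String compute() throws Exception {
        Callable<String> task = () -> "result"; // call() returns a value and may throw
        FutureTask<String> future = new FutureTask<>(task);
        new Thread(future).start();             // FutureTask is also a Runnable
        String first = future.get();            // blocks until call() finishes
        String second = future.get();           // computed once; cached result reused
        return first + "/" + second;
    }
}
```

The second get() returns immediately from the cached result; the call method never runs twice.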
The difference between callable interface and runnable interface
The specific methods are different, one is the call method and the other is the run method.
Runnable has no return value, while Callable can return a result object
The run method cannot throw checked exceptions and must handle them internally, while the call method can declare and throw exceptions
Four ways to get multithreading
Inherit Thread, rewrite the run method, and call start to execute the thread
Implement the Runnable interface, rewrite the run method, and call the start execution thread
Implement the Callable interface, override the call method, wrap the Callable in a FutureTask, pass the FutureTask to a Thread, and call its start method to run it.
Create a thread pool and execute the submit method
blocking queue
BlockingQueue is a blocking queue
Implementation class
ArrayBlockingQueue: A bounded blocking queue composed of an array structure
LinkedBlockingQueue: a blocking queue backed by a linked list, bounded but with a default capacity of Integer.MAX_VALUE (so effectively unbounded)
PriorityBlockingQueue: Unbounded blocking queue that supports priority sorting
DelayQueue: Delayed unbounded blocking queue implemented using priority queue
SynchronousQueue: a blocking queue that stores no elements; each insert operation must wait for a corresponding remove
LinkedTransferQueue: unbounded blocking queue composed of linked list
LinkedBlockingDeque: a two-way blocking queue composed of a linked list
Four groups of methods
throw an exception
insert
add(e)
When the blocking queue is full, adding and inserting elements into the queue will throw IllegalStateException:Queue full
Remove
remove()
When the blocking queue is empty, removing elements from the queue will throw NoSuchElementException.
examine
element()
When the blocking queue is empty, calling element to check the element will throw NoSuchElementException.
special value
insert
offer(e)
Insertion method, success true, failure false
Remove
poll()
Removal method: returns and deletes the head of the queue; returns null if the queue is empty
examine
peek()
Examine method: returns the head of the queue without deleting it; returns null if the queue is empty
block
insert
put(e)
When the blocking queue is full, put blocks the producer thread until space becomes available or the thread exits in response to interruption.
Remove
take()
When the blocking queue is empty, take blocks the consumer thread until an element becomes available.
examine
unavailable
time out
insert
offer(e, time, unit)
If the attempted operation cannot be performed immediately, the method call will block until it can be performed, but the wait time will not exceed the given value. Returns a specific value to tell whether the operation was successful (typically true / false).
Remove
poll(time, unit)
examine
unavailable
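The "special value" group from the table above, exercised on a capacity-1 ArrayBlockingQueue (the demo method is my own):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueDemo {
    // offer/poll/peek return special values instead of throwing or blocking.
    static String demo() {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(1); // capacity 1
        boolean a = q.offer("x"); // true: inserted
        boolean b = q.offer("y"); // false: queue full, no exception thrown
        String head = q.peek();   // "x": examines the head without removing it
        String taken = q.poll();  // "x": removes and returns the head
        String empty = q.poll();  // null: queue is empty
        return a + "," + b + "," + head + "," + taken + "," + empty;
    }
}
```

Swapping offer/poll for add/remove would turn the two failure cases into IllegalStateException and NoSuchElementException; put/take would instead block.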
ThreadPool thread pool
Thread pool tool class
Executors
Executors.newFixedThreadPool()
Executors.newSingleThreadExecutor()
Executors.newCachedThreadPool();
Executors.newScheduledThreadPool();
Executors.newSingleThreadScheduledExecutor();
These factory methods use unbounded queues or unbounded thread counts, so tasks or threads can pile up and cause OOM
Custom thread pool
new ThreadPoolExecutor
corePoolSize number of core threads
maximumPoolSize maximum number of threads
keepAliveTime non-core thread survival time
TimeUnit survival time unit
BlockingQueue thread waiting queue
The five parameters above are enough to construct a thread pool; the remaining two are optional
ThreadFactory thread factory
RejectedExecutionHandler rejection strategy
When the queue is full and the pool has reached maximumPoolSize, the rejection strategy is applied. There are four built-in rejection strategies.
AbortPolicy
Default rejection policy, throw exception directly
CallerRunsPolicy
Runs the rejected task in the caller's thread (the thread that submitted the task), not in the pool
DiscardPolicy
Discard it directly without processing or throwing an exception.
DiscardOldestPolicy
Discards the oldest task in the queue (the one at the head), then tries to add the new task again.
Thread pool executes thread tasks
execute ( )
Only thread tasks that implement the Runnable interface can be passed in
submit ( )
Accepts tasks that implement the Callable interface (Runnable overloads also exist) and returns a Future
A newly created pool starts with 0 threads; once a core thread is created it is kept alive indefinitely by default.
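A custom pool wired from the seven parameters above (queue capacity and pool sizes are arbitrary for the sketch):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    static int runTasks(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60, TimeUnit.SECONDS,                  // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(10),          // bounded waiting queue
                new ThreadPoolExecutor.AbortPolicy()); // default rejection strategy
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.execute(done::incrementAndGet);       // execute() takes a Runnable
        }
        pool.shutdown();                               // no new tasks; finish queued ones
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }
}
```

The bounded queue plus AbortPolicy means a flood of submissions fails fast with RejectedExecutionException instead of piling up toward OOM, which is exactly why hand-built pools are preferred over the Executors factories.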
The underlying principles of multi-threading and high concurrency
JVM memory model
Memory partitioning
main memory
All variables are saved
working memory
Each thread has its own working memory, exclusive to that thread, holding copies of the variables the thread uses (copies of the shared variables in main memory). The thread reads and writes through its working memory, which in turn exchanges data with main memory.
shared variables
If a variable is used by multiple threads, then this variable will keep a copy in the working memory of each thread. This type of variable is a shared variable.
Three major characteristics of the memory model
atomicity
That is, indivisible: for example, a = 1 is atomic, while a++ (read, modify, write) is not.
synchronized
Atomic class under the java.util.concurrent package
visibility
Each thread has its own working memory, so when a thread modifies a variable, other threads may not be able to observe that the variable has been modified.
final
volatile
synchronized
Orderliness
Java will reorder some instructions
volatile
synchronized
volatile keyword
Function in multi-threaded environment
visibility
This variable is guaranteed to be visible to all threads.
Orderliness
Disable instruction reordering optimization. Variables with volatile modifications perform an additional "load addl $0x0, (%esp)" operation after assignment. This operation is equivalent to a memory barrier.
Accessing a volatile variable involves no locking, so the executing thread is never blocked; volatile is therefore a lighter-weight synchronization mechanism than the synchronized keyword. It does not solve the atomicity problem.
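The classic use of volatile for visibility is a stop flag (class and method names are my own):

```java
public class VolatileFlag {
    // volatile guarantees the writer's update becomes visible to the
    // spinning reader; it does NOT make compound operations like x++ atomic.
    private volatile boolean running = true;

    public void stop() {
        running = false; // immediately visible to the spinning thread
    }

    public void spinUntilStopped() {
        while (running) {
            // without volatile, the JIT may hoist `running` into a register
            // and this loop might never observe the update
        }
    }
}
```

If the counter in such a class were incremented by multiple threads, volatile alone would not be enough; that is where AtomicInteger or synchronized comes in.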
volatile principle
visibility
When a shared variable is modified volatile, it is guaranteed that each thread will immediately synchronize the modified value of the variable back to the main memory. When other threads need to read the variable, they will read the latest variable value.
How shared variables modified by volatile work
When a thread operates on a variable, it first reads it from main memory into its own working memory. When the thread modifies the variable, the copies held by other threads that have read it are invalidated; when those threads next use the variable and find their copy expired, they go back to main memory to re-read it and thus obtain the latest value.
MESI cache coherence protocol
cache line
The smallest unit of storage that can be allocated in the CPU cache. Variables in the cache are stored in cache lines.
The core idea of MESI is that when the CPU writes to a variable and finds that the variable is a shared variable, it will notify other CPUs to set the cache line of the variable to an invalid state. When other CPUs find that the cache line of this variable is invalid when operating a variable, they will re-read the latest variable from the main memory.
Orderliness
Prohibit instruction reordering by setting memory barriers
write memory barrier
Read memory barrier
Universal memory barrier
CAS
explain
Compare And Swap. CAS is an optimistic-locking algorithm for solving multi-thread concurrency safety problems.
Basic parameters
Memory address A
old value B
new value C
Its function is to compare the contents of the specified memory address A with the given old value B. If they are equal, replace its contents with the new value C provided in the instruction; if they are not equal, the update fails.
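The A/B/C semantics above map directly onto the atomic classes (the demo method is my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    static String demo() {
        AtomicInteger value = new AtomicInteger(10); // the memory location (A)
        boolean ok  = value.compareAndSet(10, 11);   // old value (B) 10 matches: swap to new value (C) 11
        boolean bad = value.compareAndSet(10, 12);   // old value is now 11, not 10: update fails
        return ok + "," + bad + "," + value.get();
    }
}
```

A typical lock-free increment retries the CAS in a loop until the expected old value matches, which is exactly what AtomicInteger.incrementAndGet does internally.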
AQS
AbstractQueuedSynchronizer Abstract queue synchronizer is referred to as AQS. It is the basic component (framework) for implementing synchronizers.
Lock
Semaphore
CountDownLatch
CyclicBarrier
Achieved through AQS. The specific usage is to implement its template method by inheriting AQS, and then use the subclass as the internal class of the synchronization component
Main components
The shared resource variable state: 0 means the lock is free, 1 means the lock is held.
FIFO (first-in-first-out) thread wait queue
Ideas for implementing locks based on AQS
tryAcquire(int): exclusive mode. Try to obtain the resource, return true if successful, false if failed
tryRelease(int): exclusive mode. Try to release the resource, returning true if successful, false if failed.
tryAcquireShared(int): shared mode. Try to acquire the resource. A negative value means failure; 0 means success with no resources remaining; a positive value means success with resources remaining.
tryReleaseShared(int): Sharing mode. Try to release the resource. If it is allowed to wake up the subsequent waiting node after the release, it returns true, otherwise it returns false.
isHeldExclusively(): whether the current thread is holding the resource exclusively. Only needs to be implemented when Conditions are used.
AQS locking, unlocking and waiting implementation principles
Locking and unlocking are actually AQS using CAS to modify its state attributes.
Blocking and waking threads in the wait queue is done with park() and unpark() (LockSupport, implemented on top of Unsafe).
NonfairSync unfair lock
1. First barges: tries compareAndSetState directly.
2. If that fails, calls acquire(1);
1. tryAcquire: barge again; if (compareAndSetState(0, acquires)) succeeds, the lock is taken directly
2. acquireQueued: on failure, join the wait queue
FairSync fair lock
acquire(1); a fair lock never barges, so it goes straight to acquire(1) and queues up.
1. tryAcquire: if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) takes the lock only when no thread is queued ahead
2. acquireQueued: otherwise joins the wait queue
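The template-method idea above (subclass AQS inside the synchronization component, override the exclusive-mode hooks) can be sketched as a minimal non-reentrant mutex. This is an illustrative sketch, not the JDK's ReentrantLock implementation:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS state 0 -> 1: lock grabbed; on failure AQS queues the thread
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0); // back to "available"; AQS then unparks a successor
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```

A production mutex would also record and check the owner thread in tryRelease; the sketch omits that to keep the acquire/release/state triad visible.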
Lock upgrade and downgrade
Downgrade
Lock downgrade means downgrading from a write lock to a read lock. When the current thread owns the write lock, the process of acquiring the read lock again and subsequently releasing the write lock is lock downgrading.
Upgrade (only for synchronized; CPU efficiency drops as the lock is upgraded)
1. There is no thread competition, and the lock mark bit in the object header is marked as a biased lock.
The lock is biased toward the first thread that acquires it, and is not released when the synchronized block finishes; if the same thread enters again it does not need to re-acquire the lock. If the object header's Mark Word state is 01 and marked as biased, the JVM checks whether the recorded thread ID is the current thread's ID; if so, the thread enters the synchronized block in the future without any CAS locking.
2. If there is thread competition, upgrade the biased lock to a lightweight lock
Lightweight lock: the JVM spins (while(true)) trying to acquire the lock, so that the CPU can grab it quickly.
Thread spin lock grabbing
3. If a thread spins ten times without grabbing the lock, it will be upgraded to a heavyweight lock.
Heavyweight lock: the JVM uses the thread wait/wake mechanism to arbitrate the lock. The JVM avoids this unless necessary, because suspending, switching, and resuming threads is very slow.
The thread hangs directly waiting to be awakened
How to plan the number of threads
Core threads = number of CPUs * 2, non-core threads = number of CPUs * 4
bus storm
Too many volatile writes: with many threads, every write forces the other CPUs to be notified to invalidate their cached copies, flooding the bus with coherence traffic.
Waiting for processing
synchronized introduces spin locks, biased locks, lightweight locks, and heavyweight locks in turn, escalating from one to the next as contention grows.