MindMap Gallery Concurrent programming
This is a mind map about concurrent programming. In concurrent programming, a program can perform multiple tasks at the same time, thereby improving the execution efficiency and response speed of the program.
Edited at 2023-12-21 17:34:27
Concurrent programming
Introduction
Three elements
Atomicity
Indivisible: the operation either completes entirely or fails entirely
Orderliness
Code executes in program order
Visibility
When multiple threads share a variable and one of them modifies it, the other threads immediately see the latest value
Five major statuses of threads
New
new Thread() creates the thread object
Ready
start() has been called; the thread does not necessarily run yet, it waits for the CPU to schedule it
Running
Scheduled by the CPU
Blocked
e.g., sleeping or waiting on a lock
Dead
run() finished, or an uncaught exception occurred
Optimistic and pessimistic locks
Optimistic
Assumes no conflict: executes first, then validates (e.g., via CAS)
Pessimistic
Acquires the lock first; blocks if the lock cannot be acquired
Thread collaboration
wait
notify
notifyAll
synchronized
On a class (locks the Class object)
On a code block (locks the given object)
On an instance method (locks this)
On a static method (locks the Class object)
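A minimal sketch of the places `synchronized` can appear; the class and counter names are illustrative, not from the original map.

```java
// Each form of `synchronized` and the monitor it locks.
public class SyncForms {
    private static int staticCount = 0;
    private int count = 0;

    // On an instance method: locks `this`
    public synchronized void incInstance() { count++; }

    // On a static method: locks SyncForms.class
    public static synchronized void incStatic() { staticCount++; }

    // On a code block: locks an explicit object (here, `this`)
    public void incBlock() {
        synchronized (this) { count++; }
    }

    // On the class object explicitly: same monitor as the static form
    public void incClassLock() {
        synchronized (SyncForms.class) { staticCount++; }
    }

    public int getCount() { return count; }
    public static int getStaticCount() { return staticCount; }
}
```

Instance-method and block-on-`this` forms share one monitor; static-method and block-on-class forms share another, so the two pairs do not exclude each other.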
CAS
Compares the expected value with the current value before writing; a value changing A → B → A between the read and the write causes the ABA problem
In newer JDKs the operation is named compareAndSet; it was previously called compareAndSwap
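A minimal CAS sketch using `AtomicInteger.compareAndSet`: the write succeeds only when the current value still matches the expected value, and the loop retries on conflict. The method name is illustrative.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Increment via a CAS spin loop: read, attempt, retry on conflict.
    public static int spinIncrement(AtomicInteger n) {
        int old;
        do {
            old = n.get();                            // read current value
        } while (!n.compareAndSet(old, old + 1));     // fails if another thread won
        return old + 1;
    }
}
```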
Thread Pool
Improve performance
Multithreading
1. Basic theory
feature
1. Whether or not the program is concurrent, a main thread is created at startup.
2. Threads in Java share all resources of the program, such as memory and open files.
3. Thread priority
1. Thread.MIN_PRIORITY (1)
2. Thread.NORM_PRIORITY (5), the default
3. Thread.MAX_PRIORITY (10); priorities range from 1 to 10, with 10 the highest
4. You can create daemon and non-daemon threads
state
NEW: Thread has been created, but execution has not started yet
RUNNABLE: running or ready to run
BLOCKED: Waiting for lock
WAITING: Waiting for another thread to act
TIMED_WAITING: waiting for another thread to act, with a time limit
Creation methods
extends Thread
implements Runnable
implements Callable (can return a result)
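The three creation styles above, sketched minimally; class and method names are illustrative. Only the Callable style returns a result, retrieved through a `FutureTask`.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CreateThreads {
    // Style 1: extends Thread
    static class MyThread extends Thread {
        @Override public void run() { /* work */ }
    }

    // Style 2: implements Runnable (here as a lambda)
    public static void runnableStyle() {
        Thread t = new Thread(() -> { /* work */ });
        t.start();
        try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    // Style 3: implements Callable -- the only one that returns a value
    public static int callableResult() {
        Callable<Integer> task = () -> 40 + 2;
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();
        try {
            return future.get();   // blocks until the worker finishes
        } catch (Exception e) {
            return -1;             // illustrative fallback
        }
    }
}
```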
communication
Shared memory
Threads communicate by reading and writing a shared memory region
Semaphore
A counter: acquire the semaphore before access and release it after access
Mutex lock
Only one thread can access the shared resource at a time
Condition variable
Wait-notify mechanism: wait while the condition is unmet, proceed once it is met
Pipe
Memory-based communication, between parent and child processes or between threads in different processes
message queue
Inter-process communication mechanism, production-consumption model
Thread
Get Thread object information
getId() returns the thread identifier, a unique, unchangeable positive integer
getName(), setName() get and set the name
getPriority(), setPriority() get and set the priority
isDaemon(), setDaemon() get and set daemon status
getState() gets the thread state
interrupt() interrupts the target thread
interrupted() static: returns whether the current thread was interrupted, and clears the flag
isInterrupted() returns whether the thread was interrupted, without clearing the flag
sleep(long ms) pauses for ms milliseconds
join() makes the calling thread wait until this thread finishes executing
yield() pauses the thread, hinting that the CPU may be allocated to other threads
setUncaughtExceptionHandler() sets the exception handler
currentThread() returns the current Thread object
synchronized
Lightweight blocking & heavyweight blocking
Interrupted
Purpose: interrupting a thread only works for lightweight blocking (sleep/wait/join)
When is InterruptedException thrown?
If the blocking method declares it, the blocked call terminates by throwing the exception
Otherwise no exception is thrown; only the interrupt flag is set
isInterrupted() returns the current interrupt status as a boolean
daemon
Once only daemon threads remain (all non-daemon threads have exited), the JVM shuts down and the daemon threads are terminated.
Shutdown
thread.stop() is deprecated and unsafe
Use a flag variable: the loop checks it each iteration; setting the flag requests shutdown
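A sketch of the flag-based cooperative shutdown described above; the class name and timings are illustrative. The flag is `volatile` so the worker thread sees the change promptly.

```java
public class StoppableWorker implements Runnable {
    private volatile boolean running = true;  // visible across threads
    private long iterations = 0;

    @Override public void run() {
        while (running) {      // check the flag each loop iteration
            iterations++;      // simulated work
        }
    }

    public void shutdown() { running = false; }  // request a stop

    // Start the worker, let it spin briefly, then stop it cooperatively.
    public static long runBriefly() {
        StoppableWorker w = new StoppableWorker();
        Thread t = new Thread(w);
        t.start();
        try {
            Thread.sleep(50);  // let it do some work
            w.shutdown();
            t.join(1000);      // the worker exits promptly
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return w.iterations;   // join() gives a happens-before for this read
    }
}
```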
2. Core concepts of concurrency
Concurrency and Parallelism
Concurrency: the CPU switches rapidly between time slices
Parallelism: multiple tasks execute at the same instant
With thread switching it is concurrency; without switching (separate cores), it is parallelism
Synchronize
immutable object
String
Atomic operations and atomic variables
Shared memory messaging
Messaging (synchronous, asynchronous)
3. Concurrency issues
Deadlock (four necessary conditions)
Mutual exclusion
Hold and wait
No preemption
Circular wait
Livelock
Starvation (lack of resources)
Priority inversion
4. JMM memory model and instruction reordering
1. Reordering and memory visibility
Compiler reordering
CPU instruction reordering
CPU memory reordering
Memory barriers
happens-before memory visibility
volatile
Functions
Atomicity of 64-bit writes (long/double)
Memory visibility
Prevents instruction reordering
JUC
concurrent container
BlockingQueue
method
remove() non-blocking; throws an exception if the queue is empty
take() blocks until an element is available
poll() non-blocking: returns null if empty; the timed overload waits up to the timeout
ArrayBlockingQueue analysis: a ring queue implemented with an array
Attributes
Object[] items the container
int takeIndex queue head pointer
int putIndex queue tail pointer
int count element count
ReentrantLock lock the lock
Condition notEmpty "not empty" condition
Condition notFull "not full" condition
method
put
Block when queue is full
After a successful put, signals the notEmpty condition
take
Blocks when the queue is empty
After a successful take, signals the notFull condition
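The blocking put/take behavior above in use: with a tiny `ArrayBlockingQueue`, a producer and consumer coordinate with no explicit locking. The counts and capacity are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedHandoff {
    // Produce 1..count into a 2-slot queue while a consumer sums them.
    public static int sumThrough(int count) {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(2); // tiny buffer
        int[] sum = {0};
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= count; i++) q.put(i);  // blocks when full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < count; i++) sum[0] += q.take(); // blocks when empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum[0];
    }
}
```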
LinkedBlockingQueue analysis: singly linked blocking queue
Attributes
int capacity maximum capacity
count = new AtomicInteger(0) atomic variable
Node<E> head head
Node<E> last tail
Two locks, two conditions
Constructor
If the capacity is not specified, it defaults to Integer.MAX_VALUE
difference
1. Higher concurrency: two locks control the head and the tail, so although only one element is put or taken at a time, a put and a take can proceed simultaneously; the count variable is an atomic class
2. Each side holds its own lock and must call signal on the other side's condition to notify consumers and producers
PriorityBlockingQueue analysis: dequeues by priority
Attributes
Object[] queue the container
int size size
ReentrantLock lock; lock
Condition notEmpty; "not empty" condition
Constructor
If unspecified, the default initial capacity is 11
Features
No notFull condition
Insertion never blocks; the array expands as needed
DelayQueue analysis: delay queue; the element with the least remaining delay is dequeued first
Attributes
lock = new ReentrantLock(); lock
available = lock.newCondition(); non-empty condition
PriorityQueue<E> q priority queue
SynchronousQueue analysis: a special BlockingQueue with no capacity
Constructor (boolean fair)
Pass true: fair, FIFO
new TransferQueue<E>()
Pass false: unfair
TransferStack<E>();
method
transfer(E e, boolean timed, long nanos): a null e means take; a non-null e means put
Features
If 3 threads call put, all 3 block until other threads call take 3 times.
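The zero-capacity hand-off described above, sketched with one putter and one taker; the class name is illustrative. `put()` cannot return until a matching `take()` arrives.

```java
import java.util.concurrent.SynchronousQueue;

public class Handoff {
    // Hand one value directly from this thread to a taker thread.
    public static int handOff(int value) {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        int[] received = {-1};
        Thread taker = new Thread(() -> {
            try { received[0] = q.take(); }   // rendezvous: unblocks the putter
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        taker.start();
        try {
            q.put(value);   // blocks until the taker arrives
            taker.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received[0];
    }
}
```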
BlockingDeque deque
method
putFirst
putLast
takeFirst
takeLast
throw InterruptedException
inherit
BlockingQueue
Deque
LinkedBlockingDeque parsing Doubly linked list
Attributes
final int capacity fixed capacity
int count length
class Node<E> Doubly linked list
Node<E> first header Node<E> last tail
One lock, two conditions
new ReentrantLock();
notEmpty = lock.newCondition();
notFull = lock.newCondition();
Writes and reads are mutually exclusive
CopyOnWrite Copy and write
CopyOnWriteArrayList parsing underlying array
method
All read methods have no locks
add() synchronizes, makes a copy of the array, appends the element, then swaps in the new array
CopyOnWriteArraySet analysis: backed by a CopyOnWriteArrayList (a wrapper)
If the element exists, it will not be added.
ConcurrentLinkedQueue/Deque: CAS-based non-blocking queues
ConcurrentHashMap Concurrent Map
init: an initial length can be given, but the actual capacity is rounded up to a power of 2; if a thread loses the initialization race, it calls Thread.yield() and spins
put: 4 branches
1. Initialize each bucket
2. Initialize the elements of each bucket
3. Expansion
A long bucket list is converted into a red-black tree only when the array length exceeds 64; otherwise the table is resized instead
Resizing first creates a new table twice the size
question
1. During migration, a ForwardingNode is placed in migrated buckets of the old table so that writes are forwarded and no references go stale
4. Add elements
1. Lock
Determine whether the linked list is a red-black tree and add elements to them respectively.
Finally, if it is still a linked list and binCount exceeds 8, it is converted into a red-black tree.
ConcurrentSkipListMap/Set: a concurrent map with ordered keys
The multi-level linked lists resemble binary search: a skip-list query can narrow the key range because the keys are ordered.
Before operating on a node, check whether it has been deleted, since other threads may be operating concurrently.
Synchronization tools
Semaphore semaphore Based on AQS
Constructor new Semaphore(5, false): number of permits and fairness (fair vs. the more efficient unfair)
acquire() takes a permit; blocks if none are available
release(int) returns permits; defaults to 1 if not passed
CountDownLatch counter Based on AQS
Constructor new CountDownLatch(5): the count
countDown() decrements the counter by one; once it reaches 0, further calls have no effect
await() blocks until the count reaches 0, then passes
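A sketch of the latch pattern above: the main thread's `await()` releases only after every worker has called `countDown()`. The class name and party count are illustrative.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // Start `parties` workers and wait until all have finished.
    public static int awaitWorkers(int parties) {
        CountDownLatch latch = new CountDownLatch(parties);
        int[] done = {0};
        for (int i = 0; i < parties; i++) {
            new Thread(() -> {
                synchronized (done) { done[0]++; }  // simulated work
                latch.countDown();                  // signal completion
            }).start();
        }
        try { latch.await(); }                      // block until count == 0
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        synchronized (done) { return done[0]; }
    }
}
```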
CyclicBarrier Synchronizer
Constructor new CyclicBarrier(5, () -> { }): the barrier action runs only after all threads have arrived
await() marks arrival; internally a reentrant lock checks whether all have arrived and, if so, triggers the barrier action
Exchanger thread exchanges data
If no other thread calls exchange, the thread blocks until another thread calls exchange.
exchanger.exchange("Exchange data 1")
Phaser: a replacement for CountDownLatch
arrive()
awaitAdvance()
New feature 1: dynamically adjust the number of parties
register() registers one party
bulkRegister(int) registers several at once
arriveAndDeregister() arrives and deregisters
New feature 2: tiered Phasers
Not based on AQS, but has AQS features
state variable
CAS operations
blocking queue
Atomic
AtomicInteger
AtomicLong
AtomicBoolean
Internally converts the boolean to an int, then operates on it
AtomicReference
The underlying ++ and -- do not use synchronized but Unsafe's getAndAddInt: a CAS operation that reads, then attempts the modification, spinning on failure
Solve the ABA problem of CAS
AtomicStampedReference adds a version stamp (an int)
AtomicMarkableReference adds a mark (a boolean)
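A sketch of how the stamped variant defeats ABA: after an A → B → A sequence the value looks unchanged, but the stamp has advanced, so a CAS holding the stale stamp fails. Names and values are illustrative.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Returns true when the stale CAS is correctly rejected.
    public static boolean staleCasFails() {
        AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);
        int oldStamp = ref.getStamp();            // observer snapshots stamp 0

        // Another thread performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // The observer's CAS with the stale stamp fails, exposing the ABA change
        boolean succeeded = ref.compareAndSet("A", "C", oldStamp, oldStamp + 1);
        return !succeeded;
    }
}
```

A plain `AtomicReference` would have accepted the same CAS, because the value alone still reads "A".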
Lock and Condition
mutex lock
Lock reentrancy
class inheritance hierarchy
Fair and unfair locks
Fair: threads queue up, and every thread eventually gets its share
Unfair: favors throughput; the advantage is fewer thread switches
Fair and Unfair Lock Differences
Analysis of ReentrantLock based on AQS
Unfair: no queuing; when state == 0, directly set the current thread as the exclusive owner and set state; when state != 0 but the owner is the current thread, increment state (reentry); otherwise return false, acquisition fails
Fair: when state == 0 and no threads are waiting in the queue, set the current thread as the exclusive owner and set state; if the owner is already the current thread, increment state; otherwise return false, acquisition fails
Blocking queue and wake-up mechanism
The bottom layer calls the park() method to block itself, and other threads call unpark() to wake up.
Unlock analytics
Unlocking works the same whether the lock is fair or unfair
1. Check that the exclusive lock belongs to the current thread; if not, throw an exception, because only the holder of a lock may release it
2. Set state again; no CAS is needed here because only the single holding thread executes this
3. Finally unpark (next node) wakes up the next node thread
lockInterruptibly()
Can be interrupted: if the thread detects it has been interrupted by another thread, it throws an exception directly
tryLock()
Based on the unfair tryAcquire(): a CAS on state; if the lock is obtained, the critical section executes; if not, it returns false without blocking
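The non-blocking behavior above in a sketch: `tryLock()` returns true to a free-lock caller and false, immediately, to a contending thread. The class name is illustrative.

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // [0]: first tryLock on a free lock; [1]: contending tryLock.
    public static boolean[] contendedTryLock() {
        ReentrantLock lock = new ReentrantLock();
        boolean first = lock.tryLock();          // free -> true, now held
        boolean[] second = {true};
        Thread other = new Thread(() -> second[0] = lock.tryLock()); // held -> false, no blocking
        other.start();
        try { other.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        lock.unlock();                           // only the holder releases
        return new boolean[] { first, second[0] };
    }
}
```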
read-write lock
Compared with mutex locks, read-write locks do not make reads mutually exclusive with each other.
inheritance hierarchy
ReadWriteLock interface
Lock readLock()
Lock writeLock()
Basic implementation of read-write lock
Writing is mutually exclusive, reading is not mutually exclusive
ReadLock()
If the write lock is held and its holder is not the current thread, the lock grab fails
If the lock count reaches the limit, throw an exception
CAS sets the state value
If state == 0, the current thread is the first reading thread
If the current thread already holds the write lock, the read lock is acquired reentrantly
Otherwise, obtain the current thread's read-lock count from a ThreadLocal and update it
Condition
Serves a purpose similar to wait/notify
Methods
await() suspends the thread until it receives a signal or is interrupted
awaitUninterruptibly() same as await, but does not respond to interrupt requests
signal() wakes up one thread waiting on the condition
signalAll() wakes up all threads waiting on the condition
awaitNanos(long time) suspends until timeout
await(long time, TimeUnit unit) suspends with the specified time parameters
await() method
1. A thread calling await must already hold the lock, so addConditionWaiter needs no CAS; it is inherently thread-safe
2. Before waiting, the thread must release the lock, or a deadlock results; this matches the wait/notify and synchronized cooperation mechanism
3. After waking from the wait, the thread must reacquire the lock via acquireQueued
4. Check whether an interrupt signal was received during park
Other threads unpark
Thread interrupt received
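The await/signal mechanics above in a minimal one-slot buffer: one lock, two conditions, waiting in a loop that re-checks the predicate, and signaling the other side after each change. The class name and `demo` values are illustrative.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class OneSlot<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();
    private E item;  // null means empty

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (item != null) notFull.await();  // await releases the lock
            item = e;
            notEmpty.signal();                     // wake a waiting taker
        } finally { lock.unlock(); }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (item == null) notEmpty.await();
            E e = item;
            item = null;
            notFull.signal();                      // wake a waiting putter
            return e;
        } finally { lock.unlock(); }
    }

    // Hand one value from the main thread to a taker thread.
    public static int demo() {
        OneSlot<Integer> s = new OneSlot<>();
        int[] got = {0};
        Thread t = new Thread(() -> {
            try { got[0] = s.take(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t.start();
        try { s.put(9); t.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return got[0];
    }
}
```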
StampedLock, introduced in Java 8
Differences
ReentrantLock: read-read exclusive, read-write exclusive, write-write exclusive
ReentrantReadWriteLock: read-read not exclusive, read-write exclusive, write-write exclusive
StampedLock: read-read not exclusive, read-write not exclusive (optimistic reads), write-write exclusive
Thread pool and Future
Thread Pool
Implementation principle
think
1. How long is the queue? If it is unbounded, the memory will be exhausted. What to do if the queue is full?
2. Are the threads in the thread pool fixed or dynamically changing?
3. Each time a task is submitted, should it be queued or threaded?
4. When there is no task, does the thread sleep? Or blocked? If blocked, how to wake up?
Solution
1. No blocking: when the queue is empty the thread sleeps; on waking it checks the queue for tasks and loops
2. Do not use a blocking queue, but implement blocking wake-up outside the queue.
3. Use blocking queue
inheritance system
ThreadPoolExecutor
core variables
1. ctl: one AtomicInteger packs two values: the pool state and the thread count
32 bits total: the top 3 bits store the state, the remaining 29 bits the thread count
state
RUNNING -1
SHUTDOWN 0
STOP 1
TIDYING 2
TERMINATED 3
2. BlockingQueue stores task blocking queue
3. ReentrantLock, a reentrant lock
4. HashSet<Worker>: a Worker represents a thread; Worker extends AQS and is itself a lock
Construction parameters
1. corePoolSize: the number of threads always kept alive
2. maximumPoolSize: when the core threads and the queue are full, the pool grows up to this value
3. keepAliveTime: how long idle threads beyond corePoolSize wait before being destroyed
4. workQueue: the BlockingQueue type used by the pool
5. threadFactory: the thread factory
6. RejectedExecutionHandler: the rejection policy applied when the pool is saturated
4 rejection strategies
defaultHandler: AbortPolicy(), throws an exception
CallerRunsPolicy: the submitting thread runs the task itself
DiscardPolicy: silently discards the task
DiscardOldestPolicy: discards the oldest queued task
Close thread gracefully
method
executor.shutdown(): stops accepting new tasks but lets queued tasks finish
executor.shutdownNow(): interrupts running workers and drains the queue
Task submission process
method
execute(Runnable)
If fewer than corePoolSize threads exist, start a new one to run the task; otherwise enqueue it; if the queue is full, grow the pool toward maximumPoolSize; if even that fails, apply the rejection policy
Starting a new thread: addWorker(Runnable, boolean)
boolean true: corePoolSize is the upper limit
false: maximumPoolSize is the upper limit
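A sketch of constructing the pool explicitly with the six parameters above, rather than through the Executors factories; the sizes, timeout, and class name are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Build a bounded pool, run n tasks, shut down gracefully.
    public static int runTasks(int n) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2,                                // corePoolSize
            4,                                // maximumPoolSize
            30, TimeUnit.SECONDS,             // keepAliveTime for extra threads
            new ArrayBlockingQueue<>(16),     // bounded work queue
            Executors.defaultThreadFactory(), // thread factory
            new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++) pool.execute(done::incrementAndGet);

        pool.shutdown();                      // stop accepting; finish queued tasks
        try { pool.awaitTermination(5, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return done.get();
    }
}
```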
Executors tool class
newSingleThreadExecutor: single-thread pool
newFixedThreadPool(int): fixed-size thread pool
newCachedThreadPool: creates a thread per request, reusing idle ones
newSingleThreadScheduledExecutor: single-thread scheduled pool
newScheduledThreadPool(int): multi-thread scheduled pool
summary
The Alibaba Java manual forbids these factory methods, to avoid resource-exhaustion risks
ScheduledThreadPoolExecutor
Executes tasks on a time schedule
Two major categories
Delay task execution
schedule(....)
Periodic execution of tasks
scheduledAtFixedRate(...)
Fixed rate: the period is measured start-to-start, regardless of how long the task takes to execute
scheduledWithFixedDelay(...)
Fixed delay: the next run starts a fixed interval after the previous run finishes
Principle: extends ThreadPoolExecutor and relies on DelayQueue, a kind of BlockingQueue backed by a binary heap
CompletableFuture asynchronous programming tool
Purpose: hand the task to a worker thread; the main thread's get() blocks, waits for the worker to complete, then retrieves its output. It is a mechanism for interaction between worker and main threads.
Printing the future before and after get() shows the difference between "Not completed" and "Completed normally"
runAsync, supplyAsync
runAsync hands the task to the future asynchronously, with no return value
supplyAsync runs asynchronously and has a return value
thenRun, thenAccept, thenApply
future.thenRun(new Runnable()...): do the next thing; cannot access the previous return value
thenAccept(new Consumer<String>()...): can consume the previous String return value
thenApply(new Function<String, Integer>()...): takes the previous result as a parameter and returns a new value
Four functional interfaces
Runnable - runAsync, thenRun
Consumer - thenAccept
Supplier - supplyAsync
Function - thenApply
thenCompose, thenCombine
When thenApply calls nest deeply, thenCompose/thenCombine obtain the value in one step
allOf, anyOf
allOf: returns after all complete
anyOf: returns as soon as any one completes
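A sketch combining the pieces above: `supplyAsync` produces a value on a worker thread, `thenApply` transforms it, and `thenCombine` merges two independent futures while the caller blocks in `join()`. The numbers are illustrative.

```java
import java.util.concurrent.CompletableFuture;

public class CfDemo {
    // Two async computations, one transformed, then combined.
    public static int pipeline() {
        CompletableFuture<Integer> a =
            CompletableFuture.supplyAsync(() -> 20)   // worker computes 20
                             .thenApply(x -> x + 1);  // transform -> 21
        CompletableFuture<Integer> b =
            CompletableFuture.supplyAsync(() -> 21);  // independent worker
        // Merge both results once each completes; join() waits for the answer
        return a.thenCombine(b, Integer::sum).join();
    }
}
```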
Internal Principle Based on ForkJoinPool
RecursiveTask has a return value
RecursiveAction has no return value
core data structures
ring queue
Thread pool status ctl
Divided into 5 parts, the value of each part represents the status
Blocking stack: a Treiber stack implements blocking and wake-up across multiple queues
ctl variable initialization, judged by the 5 status fields
Worker blocking wake-up mechanism
Block-push
Wake-up
Task submission
Internal commit push
external commit
Work-stealing algorithm optimistic locking
join, fork
fork submits a task
If called from a worker thread, it is pushed to that worker's local queue
If called from a client thread, it is put into the shared queue
join blocks, possibly nested
Wake-up: loop checking the status; the current thread may complete the task itself, otherwise other threads must be notified
graceful closing
Workers check the pool status while running, so a shutdown request is observed.
method
Graph model: tasks form a network that is directed and acyclic (a DAG)
Multi-threaded design patterns
Single Thread Execution mode
concept
thread
critical section
when to use
1. Multi-threading
2. The status may change
3. Need to ensure safety
Immutable mode
No shared resources, no locks required
Guarded suspension mode
The request waits: block until the condition allows it to proceed, then return successfully
Balking mode
Give up and return: no blocking here; if the operation is not allowed, simply give up
Variant: wait for a limited time and return once the timeout is exceeded
Notify of timeout via exception
Future.get
Exchanger.exchange
CyclicBarrier.await
CountDownLatch.await
Notify timeout via return value
BlockingQueue.poll returns null
Semaphore.tryAcquire returns false
Lock.tryLock returns false
Producer-Consumer pattern
Producer-consumer model; with a single producer and a single consumer it can also be called the pipe pattern
Read-Write Lock mode
Guarded suspension mode adopted
Main points
Improves performance by preventing conflicts between reading and writing threads
Suitable when the read load is heavy
Suitable for read-mostly, write-rarely workloads
In JUC
ReentrantReadWriteLock, ReadWriteLock
Thread-Per-Message pattern
Hand the work over to a new thread; the main thread returns immediately
Worker Thread mode
Thread Pool
advantage
Improve throughput
control capacity
call execution separation
Polymorphic Request role
Future mode
Delay waiting for results
Basic principles of locks
1. A state variable is required to mark the status of the lock, with at least 2 values.
state > 1 means the lock supports reentrancy
state = 0 means no thread holds the lock
state = 1 means the lock is held
2. Need to record which thread currently holds the lock
3. The underlying support is required to block and wake up a thread.
LockSupport tool class
void park()
The calling thread blocks
void unpark(Thread)
Another thread calls unpark(thread) to wake the blocked thread
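A sketch of the park/unpark primitive above: a thread parks itself and exits only after another thread grants it the permit via `unpark`. The class name and sleep duration are illustrative; note that `park()` may also return spuriously, so real code re-checks its condition.

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // Park a thread, then wake it from outside with unpark.
    public static boolean parkThenUnpark() {
        Thread sleeper = new Thread(() -> LockSupport.park()); // blocks here
        sleeper.start();
        try { Thread.sleep(50); }                 // let it reach park()
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        LockSupport.unpark(sleeper);              // grant the permit, waking it
        try { sleeper.join(1000); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return !sleeper.isAlive();                // the thread exited after unpark
    }
}
```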
4. A queue must hold all blocked threads; it must be a thread-safe lock-free queue, which requires CAS
Blocking queues are the core of AQS
The bottom layer is implemented through a doubly linked list and two pointers
With a return value
Without a return value
With parameters
Without parameters