Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::internal::generic_scheduler Class Reference [abstract]

Work stealing task scheduler. More...

#include <scheduler.h>

Inheritance diagram for tbb::internal::generic_scheduler:
Collaboration diagram for tbb::internal::generic_scheduler:

Public Member Functions

bool is_task_pool_published () const
 
bool is_local_task_pool_quiescent () const
 
bool is_quiescent_local_task_pool_empty () const
 
bool is_quiescent_local_task_pool_reset () const
 
void attach_mailbox (affinity_id id)
 
void init_stack_info ()
 Sets up the data necessary for the stealing limiting heuristics. More...
 
bool can_steal ()
 Returns true if stealing is allowed. More...
 
void publish_task_pool ()
 Used by workers to enter the task pool. More...
 
void leave_task_pool ()
 Leave the task pool. More...
 
void reset_task_pool_and_leave ()
 Resets head and tail indices to 0, and leaves task pool. More...
 
task ** lock_task_pool (arena_slot *victim_arena_slot) const
 Locks victim's task pool, and returns pointer to it. The pointer can be NULL. More...
 
void unlock_task_pool (arena_slot *victim_arena_slot, task **victim_task_pool) const
 Unlocks victim's task pool. More...
 
void acquire_task_pool () const
 Locks the local task pool. More...
 
void release_task_pool () const
 Unlocks the local task pool. More...
 
task * prepare_for_spawning (task *t)
 Checks if t is affinitized to another thread, and if so, bundles it as proxy. More...
 
void commit_spawned_tasks (size_t new_tail)
 Makes newly spawned tasks visible to thieves. More...
 
void commit_relocated_tasks (size_t new_tail)
 Makes relocated tasks visible to thieves and releases the local task pool. More...
 
task * get_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Get a task from the local pool. More...
 
task * get_task (size_t T)
 Get a task from the local pool at specified location T. More...
 
task * get_mailbox_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempt to get a task from the mailbox. More...
 
task * steal_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempts to steal a task from a randomly chosen thread/scheduler. More...
 
task * steal_task_from (__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
 Steal task from another scheduler's ready pool. More...
 
size_t prepare_task_pool (size_t n)
 Makes sure that the task pool can accommodate at least n more elements. More...
 
bool cleanup_master (bool blocking_terminate)
 Perform necessary cleanup when a master thread stops using TBB. More...
 
void assert_task_pool_valid () const
 
void attach_arena (arena *, size_t index, bool is_master)
 
void nested_arena_entry (arena *, size_t)
 
void nested_arena_exit ()
 
void wait_until_empty ()
 
void spawn (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void spawn_root_and_wait (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void enqueue (task &, void *reserved) __TBB_override
 For internal use only. More...
 
void local_spawn (task *first, task *&next)
 
void local_spawn_root_and_wait (task *first, task *&next)
 
virtual void local_wait_for_all (task &parent, task *child)=0
 
void destroy ()
 Destroy and deallocate this scheduler object. More...
 
void cleanup_scheduler ()
 Cleans up this scheduler (the scheduler might be destroyed). More...
 
task & allocate_task (size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
 Allocate task object, either from the heap or a free list. More...
 
template<free_task_hint h>
void free_task (task &t)
 Put task on free list. More...
 
void deallocate_task (task &t)
 Return task object to the memory allocator. More...
 
bool is_worker () const
 True if running on a worker thread, false otherwise. More...
 
bool outermost_level () const
 True if the scheduler is on the outermost dispatch level. More...
 
bool master_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a master thread. More...
 
bool worker_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a worker thread. More...
 
unsigned max_threads_in_arena ()
 Returns the concurrency limit of the current arena. More...
 
virtual task * receive_or_steal_task (__TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation))=0
 Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption). More...
 
void free_nonlocal_small_task (task &t)
 Free a small task t that was allocated by a different scheduler. More...
 
- Public Member Functions inherited from tbb::internal::scheduler
virtual void wait_for_all (task &parent, task *child)=0
 For internal use only. More...
 
virtual ~scheduler ()=0
 Pure virtual destructor. More...
 

Static Public Member Functions

static bool is_version_3_task (task &t)
 
static bool is_proxy (const task &t)
 True if t is a task_proxy. More...
 
static generic_scheduler * create_master (arena *a)
 Initialize a scheduler for a master thread. More...
 
static generic_scheduler * create_worker (market &m, size_t index, bool genuine)
 Initialize a scheduler for a worker thread. More...
 
static void cleanup_worker (void *arg, bool worker)
 Perform necessary cleanup when a worker thread finishes. More...
 
static task * plugged_return_list ()
 Special value used to mark my_return_list as not taking any more entries. More...
 

Public Attributes

uintptr_t my_stealing_threshold
 Position in the call stack specifying its maximal filling when stealing is still allowed. More...
 
market * my_market
 The market I am in. More...
 
FastRandom my_random
 Random number generator used for picking a random victim from which to steal. More...
 
task * my_free_list
 Free list of small tasks that can be reused. More...
 
task * my_dummy_task
 Fake root task created by slave threads. More...
 
long my_ref_count
 Reference count for scheduler. More...
 
bool my_auto_initialized
 True if *this was created by automatic TBB initialization. More...
 
__TBB_atomic intptr_t my_small_task_count
 Number of small tasks that have been allocated by this scheduler. More...
 
task * my_return_list
 List of small tasks that have been returned to this scheduler by other schedulers. More...
 
- Public Attributes inherited from tbb::internal::intrusive_list_node
intrusive_list_node * my_prev_node
 
intrusive_list_node * my_next_node
 
- Public Attributes inherited from tbb::internal::scheduler_state
size_t my_arena_index
 Index of the arena slot the scheduler occupies now, or occupied last time. More...
 
arena_slot * my_arena_slot
 Pointer to the slot in the arena we own at the moment. More...
 
arena * my_arena
 The arena that I own (if master) or am servicing at the moment (if worker) More...
 
task * my_innermost_running_task
 Innermost task whose task::execute() is running. A dummy task on the outermost level. More...
 
mail_inbox my_inbox
 
affinity_id my_affinity_id
 The mailbox id assigned to this scheduler. More...
 
scheduler_properties my_properties
 

Static Public Attributes

static const size_t quick_task_size = 256-task_prefix_reservation_size
 If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd. More...
 
static const size_t null_arena_index = ~size_t(0)
 
static const size_t min_task_pool_size = 64
 

Protected Member Functions

 generic_scheduler (market &, bool)
 

Friends

template<typename SchedulerTraits >
class custom_scheduler
 

Detailed Description

Work stealing task scheduler.

None of the fields here are ever read or written by threads other than the thread that creates the instance.

Class generic_scheduler is an abstract base class that contains most of the scheduler, except for tweaks specific to processors and tools (e.g. VTune(TM) Performance Tools). The derived template class custom_scheduler<SchedulerTraits> fills in the tweaks.

Definition at line 137 of file scheduler.h.
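
The split described above (a policy-free abstract base plus a traits-parameterized derived template that supplies the tweaks) can be illustrated with a small standalone sketch. This is not TBB code; the names scheduler_base, custom_scheduler_sketch, DefaultTraits and itt_possible are hypothetical stand-ins for generic_scheduler, custom_scheduler<SchedulerTraits> and its traits.

#include <cstdio>

struct scheduler_base {                         // stands in for generic_scheduler
    virtual void local_wait_for_all() = 0;      // tweak point left to the derived class
    virtual ~scheduler_base() {}
};

template<typename SchedulerTraits>
struct custom_scheduler_sketch : scheduler_base {   // stands in for custom_scheduler<Traits>
    void local_wait_for_all() override {
        if (SchedulerTraits::itt_possible)
            std::printf("dispatch loop with tool notifications\n");
        else
            std::printf("plain dispatch loop\n");
    }
};

struct DefaultTraits { static const bool itt_possible = true; };

int main() {
    custom_scheduler_sketch<DefaultTraits> s;
    s.local_wait_for_all();                     // base interface, traits-specific behavior
}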

Constructor & Destructor Documentation

◆ generic_scheduler()

tbb::internal::generic_scheduler::generic_scheduler ( market &  m,
bool  genuine 
)
protected

Definition at line 84 of file scheduler.cpp.

85  : my_market(&m)
86  , my_random(this)
87  , my_ref_count(1)
88 #if __TBB_PREVIEW_RESUMABLE_TASKS
89  , my_co_context(m.worker_stack_size(), genuine ? NULL : this)
90 #endif
91  , my_small_task_count(1) // Extra 1 is a guard reference
92 #if __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT
93  , my_cilk_state(cs_none)
94 #endif /* __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT */
95 {
96  __TBB_ASSERT( !my_arena_index, "constructor expects the memory being zero-initialized" );
97  __TBB_ASSERT( governor::is_set(NULL), "scheduler is already initialized for this thread" );
98 
99  my_innermost_running_task = my_dummy_task = &allocate_task( sizeof(task), __TBB_CONTEXT_ARG(NULL, &the_dummy_context) );
100 #if __TBB_PREVIEW_CRITICAL_TASKS
101  my_properties.has_taken_critical_task = false;
102 #endif
103 #if __TBB_PREVIEW_RESUMABLE_TASKS
104  my_properties.genuine = genuine;
105  my_current_is_recalled = NULL;
106  my_post_resume_action = PRA_NONE;
107  my_post_resume_arg = NULL;
108  my_wait_task = NULL;
109 #else
110  suppress_unused_warning(genuine);
111 #endif
112  my_properties.outermost = true;
113 #if __TBB_TASK_PRIORITY
114  my_ref_top_priority = &m.my_global_top_priority;
115  my_ref_reload_epoch = &m.my_global_reload_epoch;
116 #endif /* __TBB_TASK_PRIORITY */
117 #if __TBB_TASK_GROUP_CONTEXT
118  // Sync up the local cancellation state with the global one. No need for fence here.
119  my_context_state_propagation_epoch = the_context_state_propagation_epoch;
120  my_context_list_head.my_prev = &my_context_list_head;
121  my_context_list_head.my_next = &my_context_list_head;
122  ITT_SYNC_CREATE(&my_context_list_mutex, SyncType_Scheduler, SyncObj_ContextsList);
123 #endif /* __TBB_TASK_GROUP_CONTEXT */
124  ITT_SYNC_CREATE(&my_dummy_task->prefix().ref_count, SyncType_Scheduler, SyncObj_WorkerLifeCycleMgmt);
125  ITT_SYNC_CREATE(&my_return_list, SyncType_Scheduler, SyncObj_TaskReturnList);
126 }

References __TBB_ASSERT, __TBB_CONTEXT_ARG, allocate_task(), tbb::internal::governor::is_set(), ITT_SYNC_CREATE, tbb::internal::scheduler_state::my_arena_index, my_dummy_task, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::scheduler_state::my_properties, my_return_list, tbb::internal::scheduler_properties::outermost, tbb::task::prefix(), tbb::internal::task_prefix::ref_count, and tbb::internal::suppress_unused_warning().

Here is the call graph for this function:

Member Function Documentation

◆ acquire_task_pool()

void tbb::internal::generic_scheduler::acquire_task_pool ( ) const
inline

Locks the local task pool.

Garbles my_arena_slot->task_pool for the duration of the lock. Requires correctly set my_arena_slot->task_pool_ptr.

ATTENTION: This method is mostly the same as generic_scheduler::lock_task_pool(), with a little different logic of slot state checks (slot is either locked or points to our task pool). Thus if either of them is changed, consider changing the counterpart as well.

Definition at line 493 of file scheduler.cpp.

493  {
494  if ( !is_task_pool_published() )
495  return; // we are not in arena - nothing to lock
496  bool sync_prepare_done = false;
497  for( atomic_backoff b;;b.pause() ) {
498 #if TBB_USE_ASSERT
499  __TBB_ASSERT( my_arena_slot == my_arena->my_slots + my_arena_index, "invalid arena slot index" );
500  // Local copy of the arena slot task pool pointer is necessary for the next
501  // assertion to work correctly to exclude asynchronous state transition effect.
502  task** tp = my_arena_slot->task_pool;
503  __TBB_ASSERT( tp == LockedTaskPool || tp == my_arena_slot->task_pool_ptr, "slot ownership corrupt?" );
504 #endif
507  {
508  // We acquired our own slot
509  ITT_NOTIFY(sync_acquired, my_arena_slot);
510  break;
511  }
512  else if( !sync_prepare_done ) {
513  // Start waiting
514  ITT_NOTIFY(sync_prepare, my_arena_slot);
515  sync_prepare_done = true;
516  }
517  // Someone else acquired a lock, so pause and do exponential backoff.
518  }
519  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "not really acquired task pool" );
520 } // generic_scheduler::acquire_task_pool

References __TBB_ASSERT, tbb::internal::as_atomic(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, tbb::internal::atomic_backoff::pause(), tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), get_task(), and prepare_task_pool().

Here is the call graph for this function:
Here is the caller graph for this function:
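
The locking scheme here can be summarized as follows: the slot's published task_pool pointer doubles as a lock word, and the owner (or a thief, in lock_task_pool()) spins until it can swap that pointer for the LockedTaskPool sentinel. Below is a minimal standalone sketch of that idea only, with illustrative names and std::atomic plus yield standing in for TBB's as_atomic/atomic_backoff machinery; it is not the real implementation.

#include <atomic>
#include <cstdint>
#include <thread>
#include <cstdio>

struct fake_task {};
static fake_task* pool_storage[4];                          // stands in for task_pool_ptr
static fake_task** const Locked =
    reinterpret_cast<fake_task**>(~std::uintptr_t(0));      // sentinel lock value
static std::atomic<fake_task**> task_pool{pool_storage};    // stands in for arena_slot::task_pool

static void acquire_pool() {
    for (;;) {
        fake_task** expected = pool_storage;                // unlocked state: points to storage
        if (task_pool.compare_exchange_strong(expected, Locked,
                                              std::memory_order_acquire))
            break;                                          // we own the pool now
        std::this_thread::yield();                          // stand-in for exponential backoff
    }
}

static void release_pool() {
    task_pool.store(pool_storage, std::memory_order_release);  // restore the real pointer
}

int main() {
    acquire_pool();
    release_pool();
    std::printf("locked and released\n");
}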

◆ allocate_task()

task & tbb::internal::generic_scheduler::allocate_task ( size_t  number_of_bytes,
__TBB_CONTEXT_ARG(task *parent, task_group_context *context)   
)

Allocate task object, either from the heap or a free list.

Returns uninitialized task object with initialized prefix.

Definition at line 337 of file scheduler.cpp.

338  {
339  GATHER_STATISTIC(++my_counters.active_tasks);
340  task *t;
341  if( number_of_bytes<=quick_task_size ) {
342 #if __TBB_HOARD_NONLOCAL_TASKS
343  if( (t = my_nonlocal_free_list) ) {
344  GATHER_STATISTIC(--my_counters.free_list_length);
345  __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
346  my_nonlocal_free_list = t->prefix().next;
347  } else
348 #endif
349  if( (t = my_free_list) ) {
350  GATHER_STATISTIC(--my_counters.free_list_length);
351  __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
352  my_free_list = t->prefix().next;
353  } else if( my_return_list ) {
354  // No fence required for read of my_return_list above, because __TBB_FetchAndStoreW has a fence.
355  t = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 ); // with acquire
356  __TBB_ASSERT( t, "another thread emptied the my_return_list" );
357  __TBB_ASSERT( t->prefix().origin==this, "task returned to wrong my_return_list" );
358  ITT_NOTIFY( sync_acquired, &my_return_list );
359  my_free_list = t->prefix().next;
360  } else {
362 #if __TBB_COUNT_TASK_NODES
363  ++my_task_node_count;
364 #endif /* __TBB_COUNT_TASK_NODES */
365  t->prefix().origin = this;
366  t->prefix().next = 0;
368  }
369 #if __TBB_PREFETCHING
370  task *t_next = t->prefix().next;
371  if( !t_next ) { // the task was last in the list
372 #if __TBB_HOARD_NONLOCAL_TASKS
373  if( my_free_list )
374  t_next = my_free_list;
375  else
376 #endif
377  if( my_return_list ) // enable prefetching, gives speedup
378  t_next = my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 );
379  }
380  if( t_next ) { // gives speedup for both cache lines
381  __TBB_cl_prefetch(t_next);
382  __TBB_cl_prefetch(&t_next->prefix());
383  }
384 #endif /* __TBB_PREFETCHING */
385  } else {
386  GATHER_STATISTIC(++my_counters.big_tasks);
387  t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+number_of_bytes, NULL ) + task_prefix_reservation_size );
388 #if __TBB_COUNT_TASK_NODES
389  ++my_task_node_count;
390 #endif /* __TBB_COUNT_TASK_NODES */
391  t->prefix().origin = NULL;
392  }
393  task_prefix& p = t->prefix();
394 #if __TBB_TASK_GROUP_CONTEXT
395  p.context = context;
396 #endif /* __TBB_TASK_GROUP_CONTEXT */
397  // Obsolete. But still in use, so has to be assigned correct value here.
398  p.owner = this;
399  p.ref_count = 0;
400  // Obsolete. Assign some not outrageously out-of-place value for a while.
401  p.depth = 0;
402  p.parent = parent;
403  // In TBB 2.1 and later, the constructor for task sets extra_state to indicate the version of the tbb/task.h header.
404  // In TBB 2.0 and earlier, the constructor leaves extra_state as zero.
405  p.extra_state = 0;
406  p.affinity = 0;
407  p.state = task::allocated;
408  __TBB_ISOLATION_EXPR( p.isolation = no_isolation );
409  return *t;
410 }

References __TBB_ASSERT, __TBB_cl_prefetch, __TBB_ISOLATION_EXPR, tbb::task::allocated, tbb::task::freed, GATHER_STATISTIC, ITT_NOTIFY, my_free_list, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, tbb::internal::NFS_Allocate(), tbb::internal::no_isolation, tbb::internal::task_prefix::origin, p, parent, tbb::task::prefix(), quick_task_size, tbb::task::state(), and tbb::internal::task_prefix_reservation_size.

Referenced by tbb::internal::allocate_root_proxy::allocate(), generic_scheduler(), and prepare_for_spawning().

Here is the call graph for this function:
Here is the caller graph for this function:
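
The branch structure above boils down to a three-level fallback. The following standalone sketch (illustrative types only, not the TBB implementation) shows the same order: reuse the owner's free list, otherwise drain the shared return list with one atomic exchange, otherwise fall back to the heap.

#include <atomic>
#include <cstdlib>
#include <cstdio>

struct small_task { small_task* next; };

static small_task*              free_list   = nullptr;   // like my_free_list (owner-only)
static std::atomic<small_task*> return_list{nullptr};    // like my_return_list (shared)

static small_task* allocate_small() {
    if (small_task* t = free_list) {                      // 1) local free list
        free_list = t->next;
        return t;
    }
    if (small_task* t = return_list.exchange(nullptr,     // 2) take the whole return list
                                             std::memory_order_acquire)) {
        free_list = t->next;                              // keep the rest for later reuse
        return t;
    }
    return static_cast<small_task*>(std::malloc(sizeof(small_task)));  // 3) heap
}

int main() {
    small_task* t = allocate_small();                     // both lists empty: comes from malloc
    std::printf("allocated %p\n", (void*)t);
    std::free(t);
}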

◆ assert_task_pool_valid()

void tbb::internal::generic_scheduler::assert_task_pool_valid ( ) const
inline

Definition at line 398 of file scheduler.h.

398 {}

Referenced by local_spawn(), prepare_task_pool(), and tbb::task::self().

Here is the caller graph for this function:

◆ attach_arena()

void tbb::internal::generic_scheduler::attach_arena ( arena *  a,
size_t  index,
bool  is_master 
)

Definition at line 80 of file arena.cpp.

80  {
81  __TBB_ASSERT( a->my_market == my_market, NULL );
82  my_arena = a;
83  my_arena_index = index;
84  my_arena_slot = a->my_slots + index;
85  attach_mailbox( affinity_id(index+1) );
86  if ( is_master && my_inbox.is_idle_state( true ) ) {
87  // Master enters an arena with its own task to be executed. It means that master is not
88  // going to enter stealing loop and take affinity tasks.
89  my_inbox.set_is_idle( false );
90  }
91 #if __TBB_TASK_GROUP_CONTEXT
92  // Context to be used by root tasks by default (if the user has not specified one).
93  if( !is_master )
94  my_dummy_task->prefix().context = a->my_default_ctx;
95 #endif /* __TBB_TASK_GROUP_CONTEXT */
96 #if __TBB_TASK_PRIORITY
97  // In the current implementation master threads continue processing even when
98  // there are other masters with higher priority. Only TBB worker threads are
99  // redistributed between arenas based on the latters' priority. Thus master
100  // threads use arena's top priority as a reference point (in contrast to workers
101  // that use my_market->my_global_top_priority).
102  if( is_master ) {
103  my_ref_top_priority = &a->my_top_priority;
104  my_ref_reload_epoch = &a->my_reload_epoch;
105  }
106  my_local_reload_epoch = *my_ref_reload_epoch;
107  __TBB_ASSERT( !my_offloaded_tasks, NULL );
108 #endif /* __TBB_TASK_PRIORITY */
109 }

References __TBB_ASSERT, attach_mailbox(), tbb::internal::task_prefix::context, tbb::internal::mail_inbox::is_idle_state(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::internal::scheduler_state::my_inbox, my_market, tbb::internal::arena_base::my_market, tbb::internal::arena::my_slots, tbb::task::prefix(), and tbb::internal::mail_inbox::set_is_idle().

Referenced by nested_arena_entry().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ attach_mailbox()

void tbb::internal::generic_scheduler::attach_mailbox ( affinity_id  id)
inline

Definition at line 667 of file scheduler.h.

667  {
668  __TBB_ASSERT(id>0,NULL);
670  my_affinity_id = id;
671 }

References __TBB_ASSERT, tbb::internal::mail_inbox::attach(), id, tbb::internal::arena::mailbox(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, and tbb::internal::scheduler_state::my_inbox.

Referenced by attach_arena().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ can_steal()

bool tbb::internal::generic_scheduler::can_steal ( )
inline

Returns true if stealing is allowed.

Definition at line 270 of file scheduler.h.

270  {
271  int anchor;
272  // TODO IDEA: Add performance warning?
273 #if __TBB_ipf
274  return my_stealing_threshold < (uintptr_t)&anchor && (uintptr_t)__TBB_get_bsp() < my_rsb_stealing_threshold;
275 #else
276  return my_stealing_threshold < (uintptr_t)&anchor;
277 #endif
278  }

References __TBB_get_bsp(), and my_stealing_threshold.

Here is the call graph for this function:
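
In other words, the heuristic treats the address of a local variable as the current stack depth and refuses to steal once execution has descended past my_stealing_threshold, which init_stack_info() places roughly halfway down the stack. A hedged standalone sketch of the same comparison follows; the threshold setup is simplified and all names are illustrative.

#include <cstdint>
#include <cstddef>
#include <cstdio>

static std::uintptr_t stealing_threshold;   // would be my_stealing_threshold in the scheduler

static void set_threshold(char* stack_base, std::size_t stack_size) {
    // Same shape as init_stack_info(): keep half of the stack in reserve.
    stealing_threshold = (std::uintptr_t)(stack_base - stack_size / 2);
}

static bool can_steal_sketch() {
    int anchor;                                            // address marks current stack depth
    return stealing_threshold < (std::uintptr_t)&anchor;   // still above the limit?
}

int main() {
    char probe;                                  // rough stand-in for the stack base
    set_threshold(&probe, 1024 * 1024);          // pretend we have a 1 MB stack
    std::printf("can steal: %d\n", can_steal_sketch());
}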

◆ cleanup_master()

bool tbb::internal::generic_scheduler::cleanup_master ( bool  blocking_terminate)

Perform necessary cleanup when a master thread stops using TBB.

Definition at line 1339 of file scheduler.cpp.

1339  {
1340  arena* const a = my_arena;
1341  market * const m = my_market;
1342  __TBB_ASSERT( my_market, NULL );
1343  if( a && is_task_pool_published() ) {
1347  {
1348  // Local task pool is empty
1349  leave_task_pool();
1350  }
1351  else {
1352  // Master's local task pool may e.g. contain proxies of affinitized tasks.
1354  __TBB_ASSERT ( governor::is_set(this), "TLS slot is cleared before the task pool cleanup" );
1355  // Set refcount to make the following dispatch loop infinite (it is interrupted by the cleanup logic).
1359  __TBB_ASSERT ( governor::is_set(this), "Other thread reused our TLS key during the task pool cleanup" );
1360  }
1361  }
1362 #if __TBB_ARENA_OBSERVER
1363  if( a )
1364  a->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
1365 #endif
1366 #if __TBB_SCHEDULER_OBSERVER
1367  the_global_observer_list.notify_exit_observers( my_last_global_observer, /*worker=*/false );
1368 #endif /* __TBB_SCHEDULER_OBSERVER */
1369 #if _WIN32||_WIN64
1370  m->unregister_master( master_exec_resource );
1371 #endif /* _WIN32||_WIN64 */
1372  if( a ) {
1373  __TBB_ASSERT(a->my_slots+0 == my_arena_slot, NULL);
1374 #if __TBB_STATISTICS
1375  *my_arena_slot->my_counters += my_counters;
1376 #endif /* __TBB_STATISTICS */
1378  }
1379 #if __TBB_TASK_GROUP_CONTEXT
1380  else { // task_group_context ownership was not transferred to arena
1381  default_context()->~task_group_context();
1382  NFS_Free(default_context());
1383  }
1384  context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1385  my_market->my_masters.remove( *this );
1386  lock.release();
1387 #endif /* __TBB_TASK_GROUP_CONTEXT */
1388  my_arena_slot = NULL; // detached from slot
1389  cleanup_scheduler(); // do not use scheduler state after this point
1390 
1391  if( a )
1392  a->on_thread_leaving<arena::ref_external>();
1393  // If there was an associated arena, it added a public market reference
1394  return m->release( /*is_public*/ a != NULL, blocking_terminate );
1395 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), cleanup_scheduler(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, tbb::internal::governor::is_set(), is_task_pool_published(), leave_task_pool(), local_wait_for_all(), lock, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_market, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, tbb::internal::NFS_Free(), tbb::internal::arena::on_thread_leaving(), tbb::internal::arena::ref_external, tbb::internal::market::release(), release_task_pool(), tbb::task::set_ref_count(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line1::task_pool.

Here is the call graph for this function:

◆ cleanup_scheduler()

void tbb::internal::generic_scheduler::cleanup_scheduler ( )

Cleans up this scheduler (the scheduler might be destroyed).

Definition at line 294 of file scheduler.cpp.

294  {
295  __TBB_ASSERT( !my_arena_slot, NULL );
296 #if __TBB_TASK_PRIORITY
297  __TBB_ASSERT( my_offloaded_tasks == NULL, NULL );
298 #endif
299 #if __TBB_PREVIEW_CRITICAL_TASKS
300  __TBB_ASSERT( !my_properties.has_taken_critical_task, "Critical tasks miscount." );
301 #endif
302 #if __TBB_TASK_GROUP_CONTEXT
303  cleanup_local_context_list();
304 #endif /* __TBB_TASK_GROUP_CONTEXT */
305  free_task<small_local_task>( *my_dummy_task );
306 
307 #if __TBB_HOARD_NONLOCAL_TASKS
308  while( task* t = my_nonlocal_free_list ) {
309  task_prefix& p = t->prefix();
310  my_nonlocal_free_list = p.next;
311  __TBB_ASSERT( p.origin && p.origin!=this, NULL );
313  }
314 #endif
315  // k accounts for a guard reference and each task that we deallocate.
316  intptr_t k = 1;
317  for(;;) {
318  while( task* t = my_free_list ) {
319  my_free_list = t->prefix().next;
320  deallocate_task(*t);
321  ++k;
322  }
324  break;
325  my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, (intptr_t)plugged_return_list() );
326  }
327 #if __TBB_COUNT_TASK_NODES
328  my_market->update_task_node_count( my_task_node_count );
329 #endif /* __TBB_COUNT_TASK_NODES */
330  // Update my_small_task_count last. Doing so sooner might cause another thread to free *this.
331  __TBB_ASSERT( my_small_task_count>=k, "my_small_task_count corrupted" );
332  governor::sign_off(this);
333  if( __TBB_FetchAndAddW( &my_small_task_count, -k )==k )
334  destroy();
335 }

References __TBB_ASSERT, deallocate_task(), destroy(), free_nonlocal_small_task(), tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_free_list, my_market, tbb::internal::scheduler_state::my_properties, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, p, plugged_return_list(), tbb::task::prefix(), and tbb::internal::governor::sign_off().

Referenced by cleanup_master().

Here is the call graph for this function:
Here is the caller graph for this function:
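
cleanup_scheduler() relies on the guard-reference idiom visible in the constructor (my_small_task_count starts at 1): the scheduler object stays alive while any of its small tasks are still in circulation, and whichever thread brings the counter back to zero calls destroy(). A minimal sketch of that idiom, with illustrative member names and std::atomic standing in for __TBB_FetchAndAddW:

#include <atomic>
#include <cstdio>

struct scheduler_stub {
    std::atomic<long> small_task_count{1};        // extra 1 is the guard reference

    void on_task_allocated()   { small_task_count.fetch_add(1, std::memory_order_relaxed); }
    void on_task_deallocated() {
        if (small_task_count.fetch_sub(1, std::memory_order_acq_rel) == 1)
            destroy();                            // we dropped the last reference
    }
    void cleanup() { on_task_deallocated(); }     // cleanup drops the guard reference itself
    void destroy() { std::printf("scheduler destroyed\n"); }
};

int main() {
    scheduler_stub s;
    s.on_task_allocated();     // one outstanding small task
    s.cleanup();               // guard dropped; object survives while the task lives
    s.on_task_deallocated();   // last task freed elsewhere -> destroy() runs
}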

◆ cleanup_worker()

void tbb::internal::generic_scheduler::cleanup_worker ( void *  arg,
bool  worker 
)
static

Perform necessary cleanup when a worker thread finishes.

Definition at line 1329 of file scheduler.cpp.

1329  {
1331  __TBB_ASSERT( !s.my_arena_slot, "cleaning up attached worker" );
1332 #if __TBB_SCHEDULER_OBSERVER
1333  if ( worker ) // can be called by master for worker, do not notify master twice
1334  the_global_observer_list.notify_exit_observers( s.my_last_global_observer, /*worker=*/true );
1335 #endif /* __TBB_SCHEDULER_OBSERVER */
1336  s.cleanup_scheduler();
1337 }

References __TBB_ASSERT, and s.

Referenced by tbb::internal::market::cleanup().

Here is the caller graph for this function:

◆ commit_relocated_tasks()

void tbb::internal::generic_scheduler::commit_relocated_tasks ( size_t  new_tail)
inline

Makes relocated tasks visible to thieves and releases the local task pool.

Obviously, the task pool must be locked when calling this method.

Definition at line 719 of file scheduler.h.

719  {
721  "Task pool must be locked when calling commit_relocated_tasks()" );
723  // Tail is updated last to minimize probability of a thread making arena
724  // snapshot being misguided into thinking that this task pool is empty.
725  __TBB_store_release( my_arena_slot->tail, new_tail );
727 }

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), __TBB_store_release, tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), tbb::internal::scheduler_state::my_arena_slot, release_task_pool(), and tbb::internal::arena_slot_line2::tail.

Referenced by prepare_task_pool().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ commit_spawned_tasks()

void tbb::internal::generic_scheduler::commit_spawned_tasks ( size_t  new_tail)
inline

Makes newly spawned tasks visible to thieves.

Definition at line 710 of file scheduler.h.

710  {
711  __TBB_ASSERT ( new_tail <= my_arena_slot->my_task_pool_size, "task deque end was overwritten" );
712  // emit "task was released" signal
713  ITT_NOTIFY(sync_releasing, (void*)((uintptr_t)my_arena_slot+sizeof(uintptr_t)));
714  // Release fence is necessary to make sure that previously stored task pointers
715  // are visible to thieves.
717 }

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, tbb::internal::scheduler_state::my_arena_slot, sync_releasing, and tbb::internal::arena_slot_line2::tail.

Referenced by local_spawn().

Here is the call graph for this function:
Here is the caller graph for this function:
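
The comment about the release fence is the heart of this method: the task pointers must be written before the new tail value becomes visible, and a thief has to read the tail with matching acquire semantics. A standalone sketch of that publication pattern (array and index names are illustrative, not the arena_slot layout):

#include <atomic>
#include <cstddef>
#include <cstdio>

static int* pool[16];                          // stands in for task_pool_ptr
static std::atomic<std::size_t> tail{0};       // stands in for my_arena_slot->tail

static void commit_spawned(int* t) {
    std::size_t T = tail.load(std::memory_order_relaxed);
    pool[T] = t;                               // plain store of the task pointer
    tail.store(T + 1, std::memory_order_release);   // like __TBB_store_with_release(tail, ...)
}

static int* thief_peek() {
    std::size_t T = tail.load(std::memory_order_acquire);  // pairs with the release store
    return T ? pool[T - 1] : nullptr;                       // pointer is guaranteed visible
}

int main() {
    static int task_payload = 42;
    commit_spawned(&task_payload);
    std::printf("%d\n", *thief_peek());
}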

◆ create_master()

generic_scheduler * tbb::internal::generic_scheduler::create_master ( arena *  a)
static

Initialize a scheduler for a master thread.

Definition at line 1285 of file scheduler.cpp.

1285  {
1286  // add an internal market reference; the public reference is possibly added in create_arena
1287  generic_scheduler* s = allocate_scheduler( market::global_market(/*is_public=*/false), /* genuine = */ true );
1288  __TBB_ASSERT( !s->my_arena, NULL );
1289  __TBB_ASSERT( s->my_market, NULL );
1290  task& t = *s->my_dummy_task;
1291  s->my_properties.type = scheduler_properties::master;
1292  t.prefix().ref_count = 1;
1293 #if __TBB_TASK_GROUP_CONTEXT
1294  t.prefix().context = new ( NFS_Allocate(1, sizeof(task_group_context), NULL) )
1296 #if __TBB_FP_CONTEXT
1297  s->default_context()->capture_fp_settings();
1298 #endif
1299  // Do not call init_stack_info before the scheduler is set as master or worker.
1300  s->init_stack_info();
1301  context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1302  s->my_market->my_masters.push_front( *s );
1303  lock.release();
1304 #endif /* __TBB_TASK_GROUP_CONTEXT */
1305  if( a ) {
1306  // Master thread always occupies the first slot
1307  s->attach_arena( a, /*index*/0, /*is_master*/true );
1308  s->my_arena_slot->my_scheduler = s;
1309 #if __TBB_TASK_GROUP_CONTEXT
1310  a->my_default_ctx = s->default_context(); // also transfers implied ownership
1311 #endif
1312  }
1313  __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
1315 
1316 #if _WIN32||_WIN64
1317  s->my_market->register_master( s->master_exec_resource );
1318 #endif /* _WIN32||_WIN64 */
1319  // Process any existing observers.
1320 #if __TBB_ARENA_OBSERVER
1321  __TBB_ASSERT( !a || a->my_observers.empty(), "Just created arena cannot have any observers associated with it" );
1322 #endif
1323 #if __TBB_SCHEDULER_OBSERVER
1324  the_global_observer_list.notify_entry_observers( s->my_last_global_observer, /*worker=*/false );
1325 #endif /* __TBB_SCHEDULER_OBSERVER */
1326  return s;
1327 }

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), tbb::internal::task_prefix::context, tbb::task_group_context::default_traits, tbb::internal::market::global_market(), tbb::task_group_context::isolated, lock, tbb::internal::scheduler_properties::master, tbb::internal::NFS_Allocate(), tbb::task::prefix(), tbb::internal::task_prefix::ref_count, s, and tbb::internal::governor::sign_on().

Referenced by tbb::internal::governor::init_scheduler(), and tbb::internal::governor::init_scheduler_weak().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ create_worker()

generic_scheduler * tbb::internal::generic_scheduler::create_worker ( market &  m,
size_t  index,
bool  genuine 
)
static

Initialize a scheduler for a worker thread.

Definition at line 1271 of file scheduler.cpp.

1271  {
1272  generic_scheduler* s = allocate_scheduler( m, genuine );
1273  __TBB_ASSERT(!genuine || index, "workers should have index > 0");
1274  s->my_arena_index = index; // index is not a real slot in arena yet
1275  s->my_dummy_task->prefix().ref_count = 2;
1276  s->my_properties.type = scheduler_properties::worker;
1277  // Do not call init_stack_info before the scheduler is set as master or worker.
1278  if (genuine)
1279  s->init_stack_info();
1281  return s;
1282 }

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), s, tbb::internal::governor::sign_on(), and tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::create_one_job().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ deallocate_task()

void tbb::internal::generic_scheduler::deallocate_task ( task &  t)
inline

Return task object to the memory allocator.

Definition at line 683 of file scheduler.h.

683  {
684 #if TBB_USE_ASSERT
685  task_prefix& p = t.prefix();
686  p.state = 0xFF;
687  p.extra_state = 0xFF;
688  poison_pointer(p.next);
689 #endif /* TBB_USE_ASSERT */
691 #if __TBB_COUNT_TASK_NODES
692  --my_task_node_count;
693 #endif /* __TBB_COUNT_TASK_NODES */
694 }

References tbb::internal::NFS_Free(), p, tbb::internal::poison_pointer(), tbb::task::prefix(), and tbb::internal::task_prefix_reservation_size.

Referenced by cleanup_scheduler(), free_nonlocal_small_task(), and free_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ destroy()

void tbb::internal::generic_scheduler::destroy ( )

Destroy and deallocate this scheduler object.

Definition at line 285 of file scheduler.cpp.

285  {
286  __TBB_ASSERT(my_small_task_count == 0, "The scheduler is still in use.");
287  this->~generic_scheduler();
288 #if TBB_USE_DEBUG
289  memset((void*)this, -1, sizeof(generic_scheduler));
290 #endif
291  NFS_Free(this);
292 }

References __TBB_ASSERT, my_small_task_count, and tbb::internal::NFS_Free().

Referenced by cleanup_scheduler().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ enqueue()

void tbb::internal::generic_scheduler::enqueue ( task &  t,
void *  reserved 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 747 of file scheduler.cpp.

747  {
749  // these redirections are due to bw-compatibility, consider reworking some day
750  __TBB_ASSERT( s->my_arena, "thread is not in any arena" );
751  s->my_arena->enqueue_task(t, (intptr_t)prio, s->my_random );
752 }

References __TBB_ASSERT, tbb::internal::governor::local_scheduler(), and s.

Here is the call graph for this function:

◆ free_nonlocal_small_task()

void tbb::internal::generic_scheduler::free_nonlocal_small_task ( task &  t)
 
Free a small task t that was allocated by a different scheduler.

Definition at line 412 of file scheduler.cpp.

412  {
413  __TBB_ASSERT( t.state()==task::freed, NULL );
414  generic_scheduler& s = *static_cast<generic_scheduler*>(t.prefix().origin);
415  __TBB_ASSERT( &s!=this, NULL );
416  for(;;) {
417  task* old = s.my_return_list;
418  if( old==plugged_return_list() )
419  break;
420  // Atomically insert t at head of s.my_return_list
421  t.prefix().next = old;
422  ITT_NOTIFY( sync_releasing, &s.my_return_list );
423  if( as_atomic(s.my_return_list).compare_and_swap(&t, old )==old ) {
424 #if __TBB_PREFETCHING
425  __TBB_cl_evict(&t.prefix());
426  __TBB_cl_evict(&t);
427 #endif
428  return;
429  }
430  }
431  deallocate_task(t);
432  if( __TBB_FetchAndDecrementWrelease( &s.my_small_task_count )==1 ) {
433  // We freed the last task allocated by scheduler s, so it's our responsibility
434  // to free the scheduler.
435  s.destroy();
436  }
437 }

References __TBB_ASSERT, __TBB_cl_evict, __TBB_FetchAndDecrementWrelease, tbb::internal::as_atomic(), deallocate_task(), tbb::task::freed, ITT_NOTIFY, tbb::internal::task_prefix::next, tbb::internal::task_prefix::origin, plugged_return_list(), tbb::task::prefix(), s, tbb::task::state(), and sync_releasing.

Referenced by cleanup_scheduler(), and free_task().

Here is the call graph for this function:
Here is the caller graph for this function:
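
The loop above is a lock-free push onto the owning scheduler's my_return_list: link the freed task in front of the current head and publish it with a compare-and-swap, retrying if another thread won the race (or stopping if the list has been plugged). A standalone sketch of the same push using std::atomic; node and list names are illustrative.

#include <atomic>
#include <cstdio>

struct node { node* next; };

static std::atomic<node*> return_list{nullptr};   // stands in for s.my_return_list

static void push_returned(node* n) {
    node* old = return_list.load(std::memory_order_relaxed);
    do {
        n->next = old;                            // like t.prefix().next = old;
    } while (!return_list.compare_exchange_weak(old, n,
                                                std::memory_order_release,
                                                std::memory_order_relaxed));
}

int main() {
    node a{}, b{};
    push_returned(&a);
    push_returned(&b);
    std::printf("head == &b: %d\n", return_list.load() == &b);
}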

◆ free_task()

template<free_task_hint hint>
void tbb::internal::generic_scheduler::free_task ( task &  t)

Put task on free list.

Does not call destructor.

Definition at line 730 of file scheduler.h.

730  {
731 #if __TBB_HOARD_NONLOCAL_TASKS
732  static const int h = hint&(~local_task);
733 #else
734  static const free_task_hint h = hint;
735 #endif
736  GATHER_STATISTIC(--my_counters.active_tasks);
737  task_prefix& p = t.prefix();
738  // Verify that optimization hints are correct.
739  __TBB_ASSERT( h!=small_local_task || p.origin==this, NULL );
740  __TBB_ASSERT( !(h&small_task) || p.origin, NULL );
741  __TBB_ASSERT( !(h&local_task) || (!p.origin || uintptr_t(p.origin) > uintptr_t(4096)), "local_task means allocated");
742  poison_value(p.depth);
743  poison_value(p.ref_count);
744  poison_pointer(p.owner);
745 #if __TBB_PREVIEW_RESUMABLE_TASKS
746  __TBB_ASSERT(1L << t.state() & (1L << task::executing | 1L << task::allocated | 1 << task::to_resume), NULL);
747 #else
748  __TBB_ASSERT(1L << t.state() & (1L << task::executing | 1L << task::allocated), NULL);
749 #endif
750  p.state = task::freed;
751  if( h==small_local_task || p.origin==this ) {
752  GATHER_STATISTIC(++my_counters.free_list_length);
753  p.next = my_free_list;
754  my_free_list = &t;
755  } else if( !(h&local_task) && p.origin && uintptr_t(p.origin) < uintptr_t(4096) ) {
756  // a special value reserved for future use, do nothing since
757  // origin is not pointing to a scheduler instance
758  } else if( !(h&local_task) && p.origin ) {
759  GATHER_STATISTIC(++my_counters.free_list_length);
760 #if __TBB_HOARD_NONLOCAL_TASKS
761  if( !(h&no_cache) ) {
762  p.next = my_nonlocal_free_list;
763  my_nonlocal_free_list = &t;
764  } else
765 #endif
767  } else {
768  GATHER_STATISTIC(--my_counters.big_tasks);
769  deallocate_task(t);
770  }
771 }

References __TBB_ASSERT, tbb::task::allocated, deallocate_task(), tbb::task::executing, free_nonlocal_small_task(), tbb::task::freed, GATHER_STATISTIC, h, tbb::internal::local_task, my_free_list, tbb::internal::no_cache, p, tbb::internal::poison_pointer(), poison_value, tbb::task::prefix(), tbb::internal::small_local_task, tbb::internal::small_task, and tbb::task::state().

Referenced by tbb::interface5::internal::task_base::destroy(), tbb::internal::allocate_additional_child_of_proxy::free(), tbb::internal::allocate_root_proxy::free(), tbb::internal::allocate_continuation_proxy::free(), tbb::internal::allocate_child_proxy::free(), and tbb::internal::auto_empty_task::~auto_empty_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ get_mailbox_task()

task * tbb::internal::generic_scheduler::get_mailbox_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempt to get a task from the mailbox.

Gets a task only if it has not been executed by its sender or a thief that has stolen it from the sender's task pool. Otherwise returns NULL.

This method is intended to be used only by the thread extracting the proxy from its mailbox. (In contrast to local task pool, mailbox can be read only by its owner).

Definition at line 1232 of file scheduler.cpp.

1232  {
1233  __TBB_ASSERT( my_affinity_id>0, "not in arena" );
1234  while ( task_proxy* const tp = my_inbox.pop( __TBB_ISOLATION_EXPR( isolation ) ) ) {
1235  if ( task* result = tp->extract_task<task_proxy::mailbox_bit>() ) {
1236  ITT_NOTIFY( sync_acquired, my_inbox.outbox() );
1237  result->prefix().extra_state |= es_task_is_stolen;
1238  return result;
1239  }
1240  // We have exclusive access to the proxy, and can destroy it.
1241  free_task<no_cache_small_task>(*tp);
1242  }
1243  return NULL;
1244 }

References __TBB_ASSERT, __TBB_ISOLATION_EXPR, tbb::internal::es_task_is_stolen, ITT_NOTIFY, tbb::internal::task_proxy::mailbox_bit, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_inbox, and tbb::internal::mail_inbox::pop().

Here is the call graph for this function:
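
The key point is that an affinitized task is reachable from two places at once, through a task_proxy in the sender's task pool and through the destination mailbox, and only one of the two sides may actually execute it; extract_task() resolves that race. The sketch below reduces the idea to a single claimed flag (the real proxy uses tag bits in task_and_tag); every name here is illustrative.

#include <atomic>
#include <cstdio>

struct task_stub { const char* name; };

struct proxy_stub {
    task_stub*        task;
    std::atomic<bool> claimed{false};
};

// Returns the task if this side won the race, NULL if the other side already took it.
static task_stub* extract(proxy_stub& p) {
    bool expected = false;
    return p.claimed.compare_exchange_strong(expected, true) ? p.task : nullptr;
}

int main() {
    task_stub t = { "affinitized task" };
    proxy_stub p;
    p.task = &t;
    task_stub* from_mailbox = extract(p);   // the mailbox owner claims it first here
    task_stub* from_pool    = extract(p);   // the pool side then sees it is gone
    std::printf("mailbox got %s, pool got %s\n",
                from_mailbox ? from_mailbox->name : "nothing",
                from_pool    ? from_pool->name    : "nothing");
}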

◆ get_task() [1/2]

task * tbb::internal::generic_scheduler::get_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )
inline

Get a task from the local pool.

Called only by the pool owner. Returns the pointer to the task or NULL if a suitable task is not found. Resets the pool if it is empty.

Definition at line 1010 of file scheduler.cpp.

1010  {
1012  // The current task position in the task pool.
1013  size_t T0 = __TBB_load_relaxed( my_arena_slot->tail );
1014  // The bounds of available tasks in the task pool. H0 is only used when the head bound is reached.
1015  size_t H0 = (size_t)-1, T = T0;
1016  task* result = NULL;
1017  bool task_pool_empty = false;
1018  __TBB_ISOLATION_EXPR( bool tasks_omitted = false );
1019  do {
1020  __TBB_ASSERT( !result, NULL );
1022  atomic_fence();
1023  if ( (intptr_t)__TBB_load_relaxed( my_arena_slot->head ) > (intptr_t)T ) {
1026  if ( (intptr_t)H0 > (intptr_t)T ) {
1027  // The thief has not backed off - nothing to grab.
1030  && H0 == T + 1, "victim/thief arbitration algorithm failure" );
1032  // No tasks in the task pool.
1033  task_pool_empty = true;
1034  break;
1035  } else if ( H0 == T ) {
1036  // There is only one task in the task pool.
1038  task_pool_empty = true;
1039  } else {
1040  // Release task pool if there are still some tasks.
1041  // After the release, the tail will be less than T, thus a thief
1042  // will not attempt to get a task at position T.
1044  }
1045  }
1046  __TBB_control_consistency_helper(); // on my_arena_slot->head
1047 #if __TBB_TASK_ISOLATION
1048  result = get_task( T, isolation, tasks_omitted );
1049  if ( result ) {
1051  break;
1052  } else if ( !tasks_omitted ) {
1054  __TBB_ASSERT( T0 == T+1, NULL );
1055  T0 = T;
1056  }
1057 #else
1058  result = get_task( T );
1059 #endif /* __TBB_TASK_ISOLATION */
1060  } while ( !result && !task_pool_empty );
1061 
1062 #if __TBB_TASK_ISOLATION
1063  if ( tasks_omitted ) {
1064  if ( task_pool_empty ) {
1065  // All tasks have been checked. The task pool should be in reset state.
1066  // We just restore the bounds for the available tasks.
1067  // TODO: Does it have sense to move them to the beginning of the task pool?
1069  if ( result ) {
1070  // If we have a task, it should be at H0 position.
1071  __TBB_ASSERT( H0 == T, NULL );
1072  ++H0;
1073  }
1074  __TBB_ASSERT( H0 <= T0, NULL );
1075  if ( H0 < T0 ) {
1076  // Restore the task pool if there are some tasks.
1079  // The release fence is used in publish_task_pool.
1081  // Synchronize with snapshot as we published some tasks.
1083  }
1084  } else {
1085  // A task has been obtained. We need to make a hole in position T.
1087  __TBB_ASSERT( result, NULL );
1088  my_arena_slot->task_pool_ptr[T] = NULL;
1090  // Synchronize with snapshot as we published some tasks.
1091  // TODO: consider some approach not to call wakeup for each time. E.g. check if the tail reached the head.
1093  }
1094 
1095  // Now it is safe to call note_affinity because the task pool is restored.
1096  if ( my_innermost_running_task == result ) {
1097  assert_task_valid( result );
1098  result->note_affinity( my_affinity_id );
1099  }
1100  }
1101 #endif /* __TBB_TASK_ISOLATION */
1102  __TBB_ASSERT( (intptr_t)__TBB_load_relaxed( my_arena_slot->tail ) >= 0, NULL );
1103  __TBB_ASSERT( result || __TBB_ISOLATION_EXPR( tasks_omitted || ) is_quiescent_local_task_pool_reset(), NULL );
1104  return result;
1105 } // generic_scheduler::get_task

References __TBB_ASSERT, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), tbb::internal::arena::advertise_new_work(), tbb::internal::assert_task_valid(), tbb::atomic_fence(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::task::note_affinity(), tbb::internal::poison_pointer(), publish_task_pool(), release_task_pool(), reset_task_pool_and_leave(), tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::wakeup.

Here is the call graph for this function:
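
At its core this is the owner's side of a work-stealing deque: the owner takes from the tail while thieves advance the head, and after speculatively moving the tail the owner must fence and re-check the head to detect a concurrent thief. The sketch below shows only that happy-path arbitration; the contended branch, where the real code locks the pool and resets head/tail, is deliberately omitted, and all names are illustrative.

#include <atomic>
#include <cstdio>

static int* deque_slots[16];
static std::atomic<long> head_idx{0};     // thieves advance this end
static std::atomic<long> tail_idx{0};     // the owner pushes and takes at this end

static void owner_push(int* t) {
    long T = tail_idx.load(std::memory_order_relaxed);
    deque_slots[T] = t;
    tail_idx.store(T + 1, std::memory_order_release);
}

static int* owner_take() {
    long T = tail_idx.load(std::memory_order_relaxed) - 1;
    tail_idx.store(T, std::memory_order_relaxed);            // announce intent to take slot T
    std::atomic_thread_fence(std::memory_order_seq_cst);     // like the atomic_fence() in get_task()
    if (head_idx.load(std::memory_order_relaxed) > T) {
        // A thief got there first. The real code now locks the pool and arbitrates on
        // head/tail (possibly resetting the deque); that part is omitted in this sketch.
        return nullptr;
    }
    return deque_slots[T];
}

int main() {
    static int payload = 7;
    owner_push(&payload);
    int* t = owner_take();
    std::printf("%d\n", t ? *t : -1);
}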

◆ get_task() [2/2]

task * tbb::internal::generic_scheduler::get_task ( size_t  T)
inline

Get a task from the local pool at specified location T.

Returns the pointer to the task or NULL if the task cannot be executed, e.g. proxy has been deallocated or isolation constraint is not met. tasks_omitted tells if some tasks have been omitted. Called only by the pool owner. The caller should guarantee that the position T is not available for a thief.

Definition at line 959 of file scheduler.cpp.

961 {
963  || is_local_task_pool_quiescent(), "Is it safe to get a task at position T?" );
964 
965  task* result = my_arena_slot->task_pool_ptr[T];
966  __TBB_ASSERT( !is_poisoned( result ), "The poisoned task is going to be processed" );
967 #if __TBB_TASK_ISOLATION
968  if ( !result )
969  return NULL;
970 
971  bool omit = isolation != no_isolation && isolation != result->prefix().isolation;
972  if ( !omit && !is_proxy( *result ) )
973  return result;
974  else if ( omit ) {
975  tasks_omitted = true;
976  return NULL;
977  }
978 #else
980  if ( !result || !is_proxy( *result ) )
981  return result;
982 #endif /* __TBB_TASK_ISOLATION */
983 
984  task_proxy& tp = static_cast<task_proxy&>(*result);
985  if ( task *t = tp.extract_task<task_proxy::pool_bit>() ) {
986  GATHER_STATISTIC( ++my_counters.proxies_executed );
987  // Following assertion should be true because TBB 2.0 tasks never specify affinity, and hence are not proxied.
988  __TBB_ASSERT( is_version_3_task( *t ), "backwards compatibility with TBB 2.0 broken" );
989  my_innermost_running_task = t; // prepare for calling note_affinity()
990 #if __TBB_TASK_ISOLATION
991  // Task affinity has changed. Postpone calling note_affinity because the task pool is in invalid state.
992  if ( !tasks_omitted )
993 #endif /* __TBB_TASK_ISOLATION */
994  {
996  t->note_affinity( my_affinity_id );
997  }
998  return t;
999  }
1000 
1001  // Proxy was empty, so it's our responsibility to free it
1002  free_task<small_task>( tp );
1003 #if __TBB_TASK_ISOLATION
1004  if ( tasks_omitted )
1005  my_arena_slot->task_pool_ptr[T] = NULL;
1006 #endif /* __TBB_TASK_ISOLATION */
1007  return NULL;
1008 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, tbb::internal::task_prefix::isolation, tbb::internal::no_isolation, tbb::internal::poison_pointer(), tbb::internal::task_proxy::pool_bit, and tbb::task::prefix().

Here is the call graph for this function:

◆ init_stack_info()

void tbb::internal::generic_scheduler::init_stack_info ( )

Sets up the data necessary for the stealing limiting heuristics.

Definition at line 158 of file scheduler.cpp.

158  {
159  // Stacks are growing top-down. Highest address is called "stack base",
160  // and the lowest is "stack limit".
161  __TBB_ASSERT( !my_stealing_threshold, "Stealing threshold has already been calculated" );
162  size_t stack_size = my_market->worker_stack_size();
163 #if USE_WINTHREAD
164 #if defined(_MSC_VER)&&_MSC_VER<1400 && !_WIN64
165  NT_TIB *pteb;
166  __asm mov eax, fs:[0x18]
167  __asm mov pteb, eax
168 #else
169  NT_TIB *pteb = (NT_TIB*)NtCurrentTeb();
170 #endif
171  __TBB_ASSERT( &pteb < pteb->StackBase && &pteb > pteb->StackLimit, "invalid stack info in TEB" );
172  __TBB_ASSERT( stack_size >0, "stack_size not initialized?" );
173  // When a thread is created with the attribute STACK_SIZE_PARAM_IS_A_RESERVATION, stack limit
174  // in the TIB points to the committed part of the stack only. This renders the expression
175  // "(uintptr_t)pteb->StackBase / 2 + (uintptr_t)pteb->StackLimit / 2" virtually useless.
176  // Thus for worker threads we use the explicit stack size we used while creating them.
177  // And for master threads we rely on the following fact and assumption:
178  // - the default stack size of a master thread on Windows is 1M;
179  // - if it was explicitly set by the application it is at least as large as the size of a worker stack.
180  if ( is_worker() || stack_size < MByte )
181  my_stealing_threshold = (uintptr_t)pteb->StackBase - stack_size / 2;
182  else
183  my_stealing_threshold = (uintptr_t)pteb->StackBase - MByte / 2;
184 #else /* USE_PTHREAD */
185  // There is no portable way to get stack base address in Posix, so we use
186  // non-portable method (on all modern Linux) or the simplified approach
187  // based on the common sense assumptions. The most important assumption
188  // is that the main thread's stack size is not less than that of other threads.
189  // See also comment 3 at the end of this file
190  void *stack_base = &stack_size;
191 #if __linux__ && !__bg__
192 #if __TBB_ipf
193  void *rsb_base = __TBB_get_bsp();
194 #endif
195  size_t np_stack_size = 0;
196  // Points to the lowest addressable byte of a stack.
197  void *stack_limit = NULL;
198 
199 #if __TBB_PREVIEW_RESUMABLE_TASKS
200  if ( !my_properties.genuine ) {
201  stack_limit = my_co_context.get_stack_limit();
202  __TBB_ASSERT( (uintptr_t)stack_base > (uintptr_t)stack_limit, "stack size must be positive" );
203  // Size of the stack free part
204  stack_size = size_t((char*)stack_base - (char*)stack_limit);
205  }
206 #endif
207 
208  pthread_attr_t np_attr_stack;
209  if( !stack_limit && 0 == pthread_getattr_np(pthread_self(), &np_attr_stack) ) {
210  if ( 0 == pthread_attr_getstack(&np_attr_stack, &stack_limit, &np_stack_size) ) {
211 #if __TBB_ipf
212  pthread_attr_t attr_stack;
213  if ( 0 == pthread_attr_init(&attr_stack) ) {
214  if ( 0 == pthread_attr_getstacksize(&attr_stack, &stack_size) ) {
215  if ( np_stack_size < stack_size ) {
216  // We are in a secondary thread. Use reliable data.
217  // IA-64 architecture stack is split into RSE backup and memory parts
218  rsb_base = stack_limit;
219  stack_size = np_stack_size/2;
220  // Limit of the memory part of the stack
221  stack_limit = (char*)stack_limit + stack_size;
222  }
223  // We are either in the main thread or this thread stack
224  // is bigger than that of the main one. As we cannot discern
225  // these cases we fall back to the default (heuristic) values.
226  }
227  pthread_attr_destroy(&attr_stack);
228  }
229  // IA-64 architecture stack is split into RSE backup and memory parts
230  my_rsb_stealing_threshold = (uintptr_t)((char*)rsb_base + stack_size/2);
231 #endif /* __TBB_ipf */
232  // TODO: pthread_attr_getstack cannot be used with Intel(R) Cilk(TM) Plus
233  // __TBB_ASSERT( (uintptr_t)stack_base > (uintptr_t)stack_limit, "stack size must be positive" );
234  // Size of the stack free part
235  stack_size = size_t((char*)stack_base - (char*)stack_limit);
236  }
237  pthread_attr_destroy(&np_attr_stack);
238  }
239 #endif /* __linux__ */
240  __TBB_ASSERT( stack_size>0, "stack size must be positive" );
241  my_stealing_threshold = (uintptr_t)((char*)stack_base - stack_size/2);
242 #endif /* USE_PTHREAD */
243 }

References __TBB_ASSERT, __TBB_get_bsp(), is_worker(), tbb::internal::MByte, my_market, tbb::internal::scheduler_state::my_properties, my_stealing_threshold, and tbb::internal::market::worker_stack_size().

Here is the call graph for this function:
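
On Linux, the stack bounds this method works from can be queried with pthread_getattr_np()/pthread_attr_getstack(), and the stealing threshold is then placed roughly halfway between the base and the limit. A hedged, Linux-only sketch of that computation follows; it is a simplification, since the real method also handles the Windows TEB data, coroutine stacks, and IA-64 register stacks shown above.

#include <pthread.h>
#include <cstddef>
#include <cstdio>

int main() {
    void*       stack_limit = nullptr;   // lowest addressable byte of the stack
    std::size_t stack_size  = 0;
    pthread_attr_t attr;
    if (pthread_getattr_np(pthread_self(), &attr) == 0) {            // glibc extension
        if (pthread_attr_getstack(&attr, &stack_limit, &stack_size) == 0) {
            // Stacks grow down: the base is the highest address, the threshold sits halfway.
            char* stack_base         = (char*)stack_limit + stack_size;
            char* stealing_threshold = stack_base - stack_size / 2;
            std::printf("base=%p threshold=%p\n",
                        (void*)stack_base, (void*)stealing_threshold);
        }
        pthread_attr_destroy(&attr);
    }
    return 0;
}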

◆ is_local_task_pool_quiescent()

bool tbb::internal::generic_scheduler::is_local_task_pool_quiescent ( ) const
inline

Definition at line 633 of file scheduler.h.

633  {
635  task** tp = my_arena_slot->task_pool;
636  return tp == EmptyTaskPool || tp == LockedTaskPool;
637 }

References __TBB_ASSERT, EmptyTaskPool, LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line1::task_pool.

Referenced by commit_relocated_tasks(), is_quiescent_local_task_pool_empty(), and is_quiescent_local_task_pool_reset().

Here is the caller graph for this function:

◆ is_proxy()

static bool tbb::internal::generic_scheduler::is_proxy ( const task &  t)
inlinestatic

True if t is a task_proxy.

Definition at line 348 of file scheduler.h.

348  {
349  return t.prefix().extra_state==es_task_proxy;
350  }

References tbb::internal::es_task_proxy, tbb::internal::task_prefix::extra_state, and tbb::task::prefix().

Referenced by steal_task(), and steal_task_from().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_quiescent_local_task_pool_empty()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_empty ( ) const
inline

Definition at line 639 of file scheduler.h.

639  {
640  __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
642 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line2::tail.

Referenced by leave_task_pool().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_quiescent_local_task_pool_reset()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_reset ( ) const
inline

Definition at line 644 of file scheduler.h.

644  {
645  __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
647 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line2::tail.

Referenced by get_task(), and prepare_task_pool().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_task_pool_published()

bool tbb::internal::generic_scheduler::is_task_pool_published ( ) const
inline

Definition at line 628 of file scheduler.h.

628  {
631 }

References __TBB_ASSERT, EmptyTaskPool, tbb::internal::scheduler_state::my_arena_slot, and tbb::internal::arena_slot_line1::task_pool.

Referenced by acquire_task_pool(), cleanup_master(), get_task(), leave_task_pool(), local_spawn(), prepare_task_pool(), and release_task_pool().

Here is the caller graph for this function:

◆ is_version_3_task()

static bool tbb::internal::generic_scheduler::is_version_3_task ( task &  t)
inlinestatic

Definition at line 146 of file scheduler.h.

146  {
147 #if __TBB_PREVIEW_CRITICAL_TASKS
148  return (t.prefix().extra_state & 0x7)>=0x1;
149 #else
150  return (t.prefix().extra_state & 0x0F)>=0x1;
151 #endif
152  }

References tbb::internal::task_prefix::extra_state, and tbb::task::prefix().

Referenced by prepare_for_spawning(), and steal_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_worker()

bool tbb::internal::generic_scheduler::is_worker ( ) const
inline

True if running on a worker thread, false otherwise.

Definition at line 673 of file scheduler.h.

673  {
675 }

References tbb::internal::scheduler_state::my_properties, tbb::internal::scheduler_properties::type, and tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::cleanup(), init_stack_info(), master_outermost_level(), nested_arena_entry(), nested_arena_exit(), and worker_outermost_level().

Here is the caller graph for this function:

◆ leave_task_pool()

void tbb::internal::generic_scheduler::leave_task_pool ( )
inline

Leave the task pool.

Leaving task pool automatically releases the task pool if it is locked.

Definition at line 1258 of file scheduler.cpp.

1258  {
1259  __TBB_ASSERT( is_task_pool_published(), "Not in arena" );
1260  // Do not reset my_arena_index. It will be used to (attempt to) re-acquire the slot next time
1261  __TBB_ASSERT( &my_arena->my_slots[my_arena_index] == my_arena_slot, "arena slot and slot index mismatch" );
1262  __TBB_ASSERT ( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when leaving arena" );
1263  __TBB_ASSERT ( is_quiescent_local_task_pool_empty(), "Cannot leave arena when the task pool is not empty" );
1265  // No release fence is necessary here as this assignment precludes external
1266  // accesses to the local task pool when becomes visible. Thus it is harmless
1267  // if it gets hoisted above preceding local bookkeeping manipulations.
1269 }

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), EmptyTaskPool, is_quiescent_local_task_pool_empty(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by cleanup_master(), and reset_task_pool_and_leave().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ local_spawn()

void tbb::internal::generic_scheduler::local_spawn ( task *  first,
task *&  next 
)

Conceptually, this method should be a member of class scheduler. But doing so would force us to publish class scheduler in the headers.

Definition at line 651 of file scheduler.cpp.

651  {
652  __TBB_ASSERT( first, NULL );
653  __TBB_ASSERT( governor::is_set(this), NULL );
654 #if __TBB_TODO
655  // We need to consider capping the max task pool size and switching
656  // to in-place task execution whenever it is reached.
657 #endif
658  if ( &first->prefix().next == &next ) {
659  // Single task is being spawned
660 #if __TBB_TODO
661  // TODO:
662  // In the future we need to add overloaded spawn method for a single task,
663  // and a method accepting an array of task pointers (we may also want to
664  // change the implementation of the task_list class). But since such changes
665  // may affect the binary compatibility, we postpone them for a while.
666 #endif
667 #if __TBB_PREVIEW_CRITICAL_TASKS
668  if( !handled_as_critical( *first ) )
669 #endif
670  {
 671  size_t T = prepare_task_pool( 1 );
 672  my_arena_slot->task_pool_ptr[T] = prepare_for_spawning( first );
 673  commit_spawned_tasks( T + 1 );
 674  if ( !is_task_pool_published() )
 675  publish_task_pool();
 676  }
677  }
678  else {
679  // Task list is being spawned
680 #if __TBB_TODO
681  // TODO: add task_list::front() and implement&document the local execution ordering which is
682  // opposite to the current implementation. The idea is to remove hackish fast_reverse_vector
683  // and use push_back/push_front when accordingly LIFO and FIFO order of local execution is
684  // desired. It also requires refactoring of the reload_tasks method and my_offloaded_tasks list.
685  // Additional benefit may come from adding counter to the task_list so that it can reserve enough
686  // space in the task pool in advance and move all the tasks directly without any intermediate
687  // storages. But it requires dealing with backward compatibility issues and still supporting
688  // counter-less variant (though not necessarily fast implementation).
689 #endif
690  task *arr[min_task_pool_size];
691  fast_reverse_vector<task*> tasks(arr, min_task_pool_size);
692  task *t_next = NULL;
693  for( task* t = first; ; t = t_next ) {
694  // If t is affinitized to another thread, it may already be executed
695  // and destroyed by the time prepare_for_spawning returns.
696  // So milk it while it is alive.
697  bool end = &t->prefix().next == &next;
698  t_next = t->prefix().next;
699 #if __TBB_PREVIEW_CRITICAL_TASKS
700  if( !handled_as_critical( *t ) )
701 #endif
702  tasks.push_back( prepare_for_spawning(t) );
703  if( end )
704  break;
705  }
706  if( size_t num_tasks = tasks.size() ) {
707  size_t T = prepare_task_pool( num_tasks );
708  tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
709  commit_spawned_tasks( T + num_tasks );
 710  if ( !is_task_pool_published() )
 711  publish_task_pool();
 712  }
 713  }
 714  my_arena->advertise_new_work<arena::work_spawned>();
 715  assert_task_pool_valid();
 716 }

References __TBB_ASSERT, tbb::internal::arena::advertise_new_work(), assert_task_pool_valid(), commit_spawned_tasks(), tbb::internal::fast_reverse_vector< T, max_segments >::copy_memory(), end, tbb::internal::first(), tbb::internal::governor::is_set(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::task_prefix::next, tbb::task::prefix(), prepare_for_spawning(), prepare_task_pool(), publish_task_pool(), tbb::internal::fast_reverse_vector< T, max_segments >::push_back(), tbb::internal::fast_reverse_vector< T, max_segments >::size(), tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::work_spawned.

Referenced by local_spawn_root_and_wait(), spawn(), and tbb::internal::custom_scheduler< SchedulerTraits >::tally_completion_of_predecessor().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ local_spawn_root_and_wait()

void tbb::internal::generic_scheduler::local_spawn_root_and_wait ( task *  first,
task *&  next 
)

Definition at line 718 of file scheduler.cpp.

718  {
719  __TBB_ASSERT( governor::is_set(this), NULL );
720  __TBB_ASSERT( first, NULL );
 721  auto_empty_task dummy( __TBB_CONTEXT_ARG(this, first->prefix().context) );
 722  reference_count n = 0;
 723  for( task* t=first; ; t=t->prefix().next ) {
724  ++n;
725  __TBB_ASSERT( !t->prefix().parent, "not a root task, or already running" );
726  t->prefix().parent = &dummy;
727  if( &t->prefix().next==&next ) break;
728 #if __TBB_TASK_GROUP_CONTEXT
729  __TBB_ASSERT( t->prefix().context == t->prefix().next->prefix().context,
730  "all the root tasks in list must share the same context");
731 #endif /* __TBB_TASK_GROUP_CONTEXT */
732  }
733  dummy.prefix().ref_count = n+1;
734  if( n>1 )
735  local_spawn( first->prefix().next, next );
736  local_wait_for_all( dummy, first );
737 }

References __TBB_ASSERT, __TBB_CONTEXT_ARG, tbb::internal::task_prefix::context, tbb::internal::first(), tbb::internal::governor::is_set(), local_spawn(), local_wait_for_all(), tbb::internal::task_prefix::next, tbb::internal::task_prefix::parent, tbb::internal::auto_empty_task::prefix(), tbb::task::prefix(), and tbb::internal::task_prefix::ref_count.

Referenced by spawn_root_and_wait().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ local_wait_for_all()

virtual void tbb::internal::generic_scheduler::local_wait_for_all ( task &  parent,
task *  child 
)
pure virtual

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.

Referenced by cleanup_master(), local_spawn_root_and_wait(), and wait_until_empty().

Here is the caller graph for this function:

◆ lock_task_pool()

task ** tbb::internal::generic_scheduler::lock_task_pool ( arena_slot *  victim_arena_slot) const
inline

Locks victim's task pool, and returns pointer to it. The pointer can be NULL.

Garbles victim_arena_slot->task_pool for the duration of the lock.

ATTENTION: This method is mostly the same as generic_scheduler::acquire_task_pool(), but with slightly different logic for the slot state checks (the slot can be empty, locked, or point to any task pool other than ours, and asynchronous transitions between all these states are possible). Thus, if either method is changed, consider changing its counterpart as well.

Definition at line 537 of file scheduler.cpp.

537  {
538  task** victim_task_pool;
539  bool sync_prepare_done = false;
540  for( atomic_backoff backoff;; /*backoff pause embedded in the loop*/) {
541  victim_task_pool = victim_arena_slot->task_pool;
542  // NOTE: Do not use comparison of head and tail indices to check for
543  // the presence of work in the victim's task pool, as they may give
544  // incorrect indication because of task pool relocations and resizes.
545  if ( victim_task_pool == EmptyTaskPool ) {
546  // The victim thread emptied its task pool - nothing to lock
547  if( sync_prepare_done )
548  ITT_NOTIFY(sync_cancel, victim_arena_slot);
549  break;
550  }
551  if( victim_task_pool != LockedTaskPool &&
552  as_atomic(victim_arena_slot->task_pool).compare_and_swap(LockedTaskPool, victim_task_pool ) == victim_task_pool )
553  {
554  // We've locked victim's task pool
555  ITT_NOTIFY(sync_acquired, victim_arena_slot);
556  break;
557  }
558  else if( !sync_prepare_done ) {
559  // Start waiting
560  ITT_NOTIFY(sync_prepare, victim_arena_slot);
561  sync_prepare_done = true;
562  }
563  GATHER_STATISTIC( ++my_counters.thieves_conflicts );
564  // Someone else acquired a lock, so pause and do exponential backoff.
565 #if __TBB_STEALING_ABORT_ON_CONTENTION
566  if(!backoff.bounded_pause()) {
567  // the 16 was acquired empirically and a theory behind it supposes
568  // that number of threads becomes much bigger than number of
569  // tasks which can be spawned by one thread causing excessive contention.
570  // TODO: However even small arenas can benefit from the abort on contention
571  // if preemption of a thief is a problem
572  if(my_arena->my_limit >= 16)
573  return EmptyTaskPool;
574  __TBB_Yield();
575  }
576 #else
577  backoff.pause();
578 #endif
579  }
580  __TBB_ASSERT( victim_task_pool == EmptyTaskPool ||
581  (victim_arena_slot->task_pool == LockedTaskPool && victim_task_pool != LockedTaskPool),
582  "not really locked victim's task pool?" );
583  return victim_task_pool;
584 } // generic_scheduler::lock_task_pool

References __TBB_ASSERT, __TBB_Yield, tbb::internal::as_atomic(), EmptyTaskPool, GATHER_STATISTIC, ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_limit, sync_cancel, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ master_outermost_level()

bool tbb::internal::generic_scheduler::master_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a master thread.

Returns true when this scheduler instance is associated with an application thread, and is not executing any TBB task. This includes being in a TBB dispatch loop (one of wait_for_all methods) invoked directly from that thread.

Definition at line 653 of file scheduler.h.

653  {
654  return !is_worker() && outermost_level();
655 }

References is_worker(), and outermost_level().

Here is the call graph for this function:

◆ max_threads_in_arena()

unsigned tbb::internal::generic_scheduler::max_threads_in_arena ( )
inline

Returns the concurrency limit of the current arena.

Definition at line 677 of file scheduler.h.

677  {
678  __TBB_ASSERT(my_arena, NULL);
679  return my_arena->my_num_slots;
680 }

References __TBB_ASSERT, tbb::internal::scheduler_state::my_arena, and tbb::internal::arena_base::my_num_slots.

Referenced by tbb::internal::get_initial_auto_partitioner_divisor(), and tbb::internal::affinity_partitioner_base_v3::resize().

Here is the caller graph for this function:

◆ nested_arena_entry()

void tbb::internal::generic_scheduler::nested_arena_entry ( arena *  a,
size_t  slot_index 
)

Definition at line 729 of file arena.cpp.

729  {
730  __TBB_ASSERT( is_alive(a->my_guard), NULL );
731  __TBB_ASSERT( a!=my_arena, NULL);
732 
733  // overwrite arena settings
734 #if __TBB_TASK_PRIORITY
735  if ( my_offloaded_tasks )
736  my_arena->orphan_offloaded_tasks( *this );
737  my_offloaded_tasks = NULL;
738 #endif /* __TBB_TASK_PRIORITY */
739  attach_arena( a, slot_index, /*is_master*/true );
 740  __TBB_ASSERT( my_arena == a, NULL );
 741  governor::assume_scheduler( this );
 742  // TODO? ITT_NOTIFY(sync_acquired, a->my_slots + index);
743  // TODO: it requires market to have P workers (not P-1)
744  // TODO: a preempted worker should be excluded from assignment to other arenas e.g. my_slack--
 745  if( !is_worker() && slot_index >= my_arena->my_num_reserved_slots )
 746  my_arena->my_market->adjust_demand(*my_arena, -1);
 747 #if __TBB_ARENA_OBSERVER
748  my_last_local_observer = 0; // TODO: try optimize number of calls
749  my_arena->my_observers.notify_entry_observers( my_last_local_observer, /*worker=*/false );
750 #endif
751 #if __TBB_PREVIEW_RESUMABLE_TASKS
752  my_wait_task = NULL;
753 #endif
754 }

References __TBB_ASSERT, tbb::internal::market::adjust_demand(), tbb::internal::governor::assume_scheduler(), attach_arena(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_market, and tbb::internal::arena_base::my_num_reserved_slots.

Here is the call graph for this function:

◆ nested_arena_exit()

void tbb::internal::generic_scheduler::nested_arena_exit ( )

Definition at line 756 of file arena.cpp.

756  {
757 #if __TBB_ARENA_OBSERVER
758  my_arena->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
759 #endif /* __TBB_ARENA_OBSERVER */
760 #if __TBB_TASK_PRIORITY
761  if ( my_offloaded_tasks )
762  my_arena->orphan_offloaded_tasks( *this );
763 #endif
 764  if( !is_worker() && my_arena_index >= my_arena->my_num_reserved_slots )
 765  my_arena->my_market->adjust_demand(*my_arena, 1);
 766  // Free the master slot.
 767  __TBB_ASSERT(my_arena->my_slots[my_arena_index].my_scheduler, "A slot is already empty");
 768  __TBB_store_with_release(my_arena->my_slots[my_arena_index].my_scheduler, (generic_scheduler*)NULL);
 769  my_arena->my_exit_monitors.notify_one(); // do not relax!
770 }

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), tbb::internal::market::adjust_demand(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::arena_base::my_exit_monitors, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, and tbb::internal::concurrent_monitor::notify_one().

Referenced by tbb::internal::nested_arena_context::~nested_arena_context().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ outermost_level()

bool tbb::internal::generic_scheduler::outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level.

Definition at line 649 of file scheduler.h.

649  {
650  return my_properties.outermost;
651 }

References tbb::internal::scheduler_state::my_properties, and tbb::internal::scheduler_properties::outermost.

Referenced by master_outermost_level(), and worker_outermost_level().

Here is the caller graph for this function:

◆ plugged_return_list()

static task* tbb::internal::generic_scheduler::plugged_return_list ( )
inlinestatic

Special value used to mark my_return_list as not taking any more entries.

Definition at line 458 of file scheduler.h.

458 {return (task*)(intptr_t)(-1);}

Referenced by cleanup_scheduler(), and free_nonlocal_small_task().

Here is the caller graph for this function:
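
The sketch below shows the usual way such a sentinel is consulted (a self-contained illustration under assumed names, not the free_nonlocal_small_task() logic): once the owner plugs its return list, other schedulers stop queueing freed tasks onto it and dispose of them directly.

#include <cstdint>

struct node { node* next; };

// Same sentinel idea as plugged_return_list(): an all-ones pointer value that
// no real list node can ever equal.
static node* plugged() { return reinterpret_cast<node*>( std::intptr_t(-1) ); }

// Returns false when the owner has plugged the list, telling the caller to
// free the node itself instead of handing it back.
bool try_return_to_owner( node*& return_list, node* freed ) {
    if( return_list == plugged() )
        return false;
    freed->next = return_list;   // non-atomic push, purely for illustration
    return_list = freed;
    return true;
}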

◆ prepare_for_spawning()

task * tbb::internal::generic_scheduler::prepare_for_spawning ( task *  t)
inline

Checks if t is affinitized to another thread, and if so, bundles it as proxy.

Returns either t or proxy containing t.

Definition at line 595 of file scheduler.cpp.

595  {
596  __TBB_ASSERT( t->state()==task::allocated, "attempt to spawn task that is not in 'allocated' state" );
597  t->prefix().state = task::ready;
598 #if TBB_USE_ASSERT
599  if( task* parent = t->parent() ) {
600  internal::reference_count ref_count = parent->prefix().ref_count;
601  __TBB_ASSERT( ref_count>=0, "attempt to spawn task whose parent has a ref_count<0" );
602  __TBB_ASSERT( ref_count!=0, "attempt to spawn task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
603  parent->prefix().extra_state |= es_ref_count_active;
604  }
605 #endif /* TBB_USE_ASSERT */
606  affinity_id dst_thread = t->prefix().affinity;
607  __TBB_ASSERT( dst_thread == 0 || is_version_3_task(*t),
608  "backwards compatibility to TBB 2.0 tasks is broken" );
 609 #if __TBB_TASK_ISOLATION
 610  isolation_tag isolation = my_innermost_running_task->prefix().isolation;
 611  t->prefix().isolation = isolation;
612 #endif /* __TBB_TASK_ISOLATION */
613  if( dst_thread != 0 && dst_thread != my_affinity_id ) {
614  task_proxy& proxy = (task_proxy&)allocate_task( sizeof(task_proxy),
615  __TBB_CONTEXT_ARG(NULL, NULL) );
616  // Mark as a proxy
617  proxy.prefix().extra_state = es_task_proxy;
618  proxy.outbox = &my_arena->mailbox(dst_thread);
619  // Mark proxy as present in both locations (sender's task pool and destination mailbox)
620  proxy.task_and_tag = intptr_t(t) | task_proxy::location_mask;
621 #if __TBB_TASK_PRIORITY
622  poison_pointer( proxy.prefix().context );
623 #endif /* __TBB_TASK_PRIORITY */
624  __TBB_ISOLATION_EXPR( proxy.prefix().isolation = isolation );
625  ITT_NOTIFY( sync_releasing, proxy.outbox );
626  // Mail the proxy - after this point t may be destroyed by another thread at any moment.
627  proxy.outbox->push(&proxy);
628  return &proxy;
629  }
630  return t;
631 }

References __TBB_ASSERT, __TBB_CONTEXT_ARG, __TBB_ISOLATION_EXPR, tbb::internal::task_prefix::affinity, allocate_task(), tbb::task::allocated, tbb::internal::task_prefix::context, tbb::internal::es_ref_count_active, tbb::internal::es_task_proxy, tbb::internal::task_prefix::extra_state, is_version_3_task(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, tbb::internal::task_proxy::location_mask, tbb::internal::arena::mailbox(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::task_proxy::outbox, parent, tbb::task::parent(), tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::push(), tbb::task::ready, tbb::internal::task_prefix::state, tbb::task::state(), sync_releasing, and tbb::internal::task_proxy::task_and_tag.

Referenced by local_spawn().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ prepare_task_pool()

size_t tbb::internal::generic_scheduler::prepare_task_pool ( size_t  n)
inline

Makes sure that the task pool can accommodate at least n more elements.

If necessary, relocates existing task pointers or grows the ready task deque. Returns the (possibly updated) tail index (not accounting for n).

Definition at line 439 of file scheduler.cpp.

439  {
440  size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
441  if ( T + num_tasks <= my_arena_slot->my_task_pool_size )
442  return T;
443 
444  size_t new_size = num_tasks;
 445 
 446  if ( !my_arena_slot->my_task_pool_size ) {
 447  __TBB_ASSERT( !is_task_pool_published() && is_quiescent_local_task_pool_reset(), NULL );
 448  __TBB_ASSERT( !my_arena_slot->task_pool_ptr, NULL );
 449  if ( num_tasks < min_task_pool_size ) new_size = min_task_pool_size;
 450  my_arena_slot->allocate_task_pool( new_size );
 451  return 0;
 452  }
 453 
 454  acquire_task_pool();
 455  size_t H = __TBB_load_relaxed( my_arena_slot->head ); // mirror
456  task** task_pool = my_arena_slot->task_pool_ptr;;
458  // Count not skipped tasks. Consider using std::count_if.
459  for ( size_t i = H; i < T; ++i )
460  if ( task_pool[i] ) ++new_size;
461  // If the free space at the beginning of the task pool is too short, we
462  // are likely facing a pathological single-producer-multiple-consumers
 463  // scenario, and thus it's better to expand the task pool
 464  bool allocate = new_size > my_arena_slot->my_task_pool_size - min_task_pool_size/4;
 465  if ( allocate ) {
466  // Grow task pool. As this operation is rare, and its cost is asymptotically
467  // amortizable, we can tolerate new task pool allocation done under the lock.
 468  if ( new_size < 2 * my_arena_slot->my_task_pool_size )
 469  new_size = 2 * my_arena_slot->my_task_pool_size;
 470  my_arena_slot->allocate_task_pool( new_size ); // updates my_task_pool_size
471  }
472  // Filter out skipped tasks. Consider using std::copy_if.
473  size_t T1 = 0;
474  for ( size_t i = H; i < T; ++i )
475  if ( task_pool[i] )
476  my_arena_slot->task_pool_ptr[T1++] = task_pool[i];
477  // Deallocate the previous task pool if a new one has been allocated.
478  if ( allocate )
479  NFS_Free( task_pool );
 480  else
 481  my_arena_slot->fill_with_canary_pattern( T1, T );
 482  // Publish the new state.
 483  commit_relocated_tasks( T1 );
 484  assert_task_pool_valid();
 485  return T1;
 486 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), acquire_task_pool(), tbb::internal::arena_slot::allocate_task_pool(), assert_task_pool_valid(), commit_relocated_tasks(), tbb::internal::arena_slot::fill_with_canary_pattern(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena_slot_line2::my_task_pool_size, new_size, tbb::internal::NFS_Free(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by local_spawn().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ publish_task_pool()

void tbb::internal::generic_scheduler::publish_task_pool ( )
inline

Used by workers to enter the task pool.

Does not lock the task pool if the arena slot has been successfully grabbed.

Definition at line 1246 of file scheduler.cpp.

1246  {
1247  __TBB_ASSERT ( my_arena, "no arena: initialization not completed?" );
1248  __TBB_ASSERT ( my_arena_index < my_arena->my_num_slots, "arena slot index is out-of-bound" );
1249  __TBB_ASSERT ( my_arena_slot == &my_arena->my_slots[my_arena_index], NULL );
1250  __TBB_ASSERT ( my_arena_slot->task_pool == EmptyTaskPool, "someone else grabbed my arena slot?" );
1251  __TBB_ASSERT ( __TBB_load_relaxed(my_arena_slot->head) < __TBB_load_relaxed(my_arena_slot->tail),
1252  "entering arena without tasks to share" );
1253  // Release signal on behalf of previously spawned tasks (when this thread was not in arena yet)
1254  ITT_NOTIFY(sync_releasing, my_arena_slot);
1255  __TBB_store_with_release( my_arena_slot->task_pool, my_arena_slot->task_pool_ptr );
1256 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, ITT_NOTIFY, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by get_task(), and local_spawn().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ receive_or_steal_task()

virtual task* tbb::internal::generic_scheduler::receive_or_steal_task ( __TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation)  )
pure virtual

Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption).

Returns obtained task or NULL if all attempts fail.

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.
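
Since no body is shown here, the following is a rough sketch of the polling loop the description implies. It is an assumption, not the custom_scheduler implementation: it assumes __TBB_TASK_ISOLATION is off (so the isolation parameters disappear), and it omits the FIFO queue and orphan-adoption sources.

// Hedged sketch only; 's' is the calling scheduler and completion_ref_count is
// the reference count this wait is blocked on.
task* receive_or_steal_task_sketch( generic_scheduler& s,
                                    reference_count& completion_ref_count ) {
    for(;;) {
        if( completion_ref_count == 1 )
            return NULL;                      // awaited tasks have all completed
        if( task* t = s.get_mailbox_task() )  // tasks mailed to this thread by affinity
            return t;
        if( task* t = s.steal_task() )        // random-victim stealing
            return t;
        __TBB_Yield();                        // back off before the next probe
    }
}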

◆ release_task_pool()

void tbb::internal::generic_scheduler::release_task_pool ( ) const
inline

Unlocks the local task pool.

Restores my_arena_slot->task_pool munged by acquire_task_pool. Requires correctly set my_arena_slot->task_pool_ptr.

Definition at line 522 of file scheduler.cpp.

522  {
523  if ( !is_task_pool_published() )
524  return; // we are not in arena - nothing to unlock
525  __TBB_ASSERT( my_arena_slot, "we are not in arena" );
 526  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "arena slot is not locked" );
 527  ITT_NOTIFY(sync_releasing, my_arena_slot);
 528  __TBB_store_with_release( my_arena_slot->task_pool, my_arena_slot->task_pool_ptr );
 529 }

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, sync_releasing, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), commit_relocated_tasks(), and get_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ reset_task_pool_and_leave()

void tbb::internal::generic_scheduler::reset_task_pool_and_leave ( )
inline

Resets head and tail indices to 0, and leaves task pool.

The task pool must be locked by the owner (via acquire_task_pool).

Definition at line 702 of file scheduler.h.

702  {
 703  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when resetting task pool" );
 704  __TBB_store_relaxed( my_arena_slot->tail, 0 );
 705  __TBB_store_relaxed( my_arena_slot->head, 0 );
 706  leave_task_pool();
707 }

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), tbb::internal::arena_slot_line1::head, leave_task_pool(), LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line1::task_pool.

Referenced by get_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ spawn()

void tbb::internal::generic_scheduler::spawn ( task &  first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 739 of file scheduler.cpp.

 739  {
 740  governor::local_scheduler()->local_spawn( &first, next );
 741 }

References tbb::internal::first(), tbb::internal::governor::local_scheduler(), and local_spawn().

Here is the call graph for this function:

◆ spawn_root_and_wait()

void tbb::internal::generic_scheduler::spawn_root_and_wait ( task &  first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 743 of file scheduler.cpp.

 743  {
 744  governor::local_scheduler()->local_spawn_root_and_wait( &first, next );
 745 }

References tbb::internal::first(), tbb::internal::governor::local_scheduler(), and local_spawn_root_and_wait().

Here is the call graph for this function:

◆ steal_task()

task * tbb::internal::generic_scheduler::steal_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempts to steal a task from a randomly chosen thread/scheduler.

Definition at line 1107 of file scheduler.cpp.

1107  {
1108  // Try to steal a task from a random victim.
1109  size_t k = my_random.get() % (my_arena->my_limit-1);
1110  arena_slot* victim = &my_arena->my_slots[k];
1111  // The following condition excludes the master that might have
1112  // already taken our previous place in the arena from the list .
1113  // of potential victims. But since such a situation can take
1114  // place only in case of significant oversubscription, keeping
1115  // the checks simple seems to be preferable to complicating the code.
1116  if( k >= my_arena_index )
1117  ++victim; // Adjusts random distribution to exclude self
1118  task **pool = victim->task_pool;
1119  task *t = NULL;
1120  if( pool == EmptyTaskPool || !(t = steal_task_from( __TBB_ISOLATION_ARG(*victim, isolation) )) )
1121  return NULL;
1122  if( is_proxy(*t) ) {
1123  task_proxy &tp = *(task_proxy*)t;
1124  t = tp.extract_task<task_proxy::pool_bit>();
1125  if ( !t ) {
1126  // Proxy was empty, so it's our responsibility to free it
1127  free_task<no_cache_small_task>(tp);
1128  return NULL;
1129  }
1130  GATHER_STATISTIC( ++my_counters.proxies_stolen );
1131  }
1132  t->prefix().extra_state |= es_task_is_stolen;
1133  if( is_version_3_task(*t) ) {
1134  my_innermost_running_task = t;
1135  t->prefix().owner = this;
1136  t->note_affinity( my_affinity_id );
1137  }
1138  GATHER_STATISTIC( ++my_counters.steals_committed );
1139  return t;
1140 }

References __TBB_ISOLATION_ARG, EmptyTaskPool, tbb::internal::es_task_is_stolen, tbb::internal::task_prefix::extra_state, tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, tbb::internal::FastRandom::get(), is_proxy(), is_version_3_task(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::arena_base::my_limit, my_random, tbb::internal::arena::my_slots, tbb::task::note_affinity(), tbb::internal::task_prefix::owner, tbb::internal::task_proxy::pool_bit, tbb::task::prefix(), steal_task_from(), and tbb::internal::arena_slot_line1::task_pool.

Here is the call graph for this function:
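
The victim-selection trick at the top of the listing (draw k from the limit-1 other slots, then step over the caller's own slot) can be spelled out in isolation. Below is a small self-contained sketch; pick_victim is a hypothetical helper name, not a TBB function, and it assumes self_index < limit, unlike the oversubscription corner case noted in the code comment above.

#include <cassert>
#include <cstddef>
#include <cstdlib>

// Drawing k from limit-1 candidates and shifting it past the caller's own slot
// makes every other slot equally likely while never returning the caller.
std::size_t pick_victim( std::size_t self_index, std::size_t limit ) {
    assert( limit > 1 );
    std::size_t k = std::rand() % ( limit - 1 ); // limit-1 candidate slots
    if( k >= self_index )
        ++k;                                     // skip over our own slot
    return k;                                    // 0 <= k < limit, k != self_index
}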

◆ steal_task_from()

task * tbb::internal::generic_scheduler::steal_task_from ( __TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation)  )

Steal task from another scheduler's ready pool.

Definition at line 1142 of file scheduler.cpp.

1142  {
1143  task** victim_pool = lock_task_pool( &victim_slot );
1144  if ( !victim_pool )
1145  return NULL;
1146  task* result = NULL;
1147  size_t H = __TBB_load_relaxed(victim_slot.head); // mirror
1148  size_t H0 = H;
1149  bool tasks_omitted = false;
1150  do {
1151  __TBB_store_relaxed( victim_slot.head, ++H );
1152  atomic_fence();
1153  if ( (intptr_t)H > (intptr_t)__TBB_load_relaxed( victim_slot.tail ) ) {
1154  // Stealing attempt failed, deque contents has not been changed by us
1155  GATHER_STATISTIC( ++my_counters.thief_backoffs );
1156  __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1157  __TBB_ASSERT( !result, NULL );
1158  goto unlock;
1159  }
1160  __TBB_control_consistency_helper(); // on victim_slot.tail
1161  result = victim_pool[H-1];
1162  __TBB_ASSERT( !is_poisoned( result ), NULL );
1163 
1164  if ( result ) {
1165  __TBB_ISOLATION_EXPR( if ( isolation == no_isolation || isolation == result->prefix().isolation ) )
1166  {
1167  if ( !is_proxy( *result ) )
1168  break;
1169  task_proxy& tp = *static_cast<task_proxy*>(result);
1170  // If mailed task is likely to be grabbed by its destination thread, skip it.
1171  if ( !(task_proxy::is_shared( tp.task_and_tag ) && tp.outbox->recipient_is_idle()) )
1172  break;
1173  GATHER_STATISTIC( ++my_counters.proxies_bypassed );
1174  }
1175  // The task cannot be executed either due to isolation or proxy constraints.
1176  result = NULL;
1177  tasks_omitted = true;
1178  } else if ( !tasks_omitted ) {
1179  // Cleanup the task pool from holes until a task is skipped.
1180  __TBB_ASSERT( H0 == H-1, NULL );
1181  poison_pointer( victim_pool[H0] );
1182  H0 = H;
1183  }
1184  } while ( !result );
1185  __TBB_ASSERT( result, NULL );
1186 
1187  // emit "task was consumed" signal
1188  ITT_NOTIFY( sync_acquired, (void*)((uintptr_t)&victim_slot+sizeof( uintptr_t )) );
1189  poison_pointer( victim_pool[H-1] );
1190  if ( tasks_omitted ) {
1191  // Some proxies in the task pool have been omitted. Set the stolen task to NULL.
1192  victim_pool[H-1] = NULL;
1193  __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1194  }
1195 unlock:
1196  unlock_task_pool( &victim_slot, victim_pool );
1197 #if __TBB_PREFETCHING
1198  __TBB_cl_evict(&victim_slot.head);
1199  __TBB_cl_evict(&victim_slot.tail);
1200 #endif
1201  if ( tasks_omitted )
1202  // Synchronize with snapshot as the head and tail can be bumped which can falsely trigger EMPTY state
1203  my_arena->advertise_new_work<arena::wakeup>();
1204  return result;
1205 }

References __TBB_ASSERT, __TBB_cl_evict, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::arena::advertise_new_work(), tbb::atomic_fence(), GATHER_STATISTIC, tbb::internal::arena_slot_line1::head, is_proxy(), tbb::internal::task_proxy::is_shared(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, lock_task_pool(), tbb::internal::scheduler_state::my_arena, tbb::internal::no_isolation, tbb::internal::task_proxy::outbox, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::recipient_is_idle(), tbb::internal::arena_slot_line2::tail, tbb::internal::task_proxy::task_and_tag, unlock_task_pool(), and tbb::internal::arena::wakeup.

Referenced by steal_task().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ unlock_task_pool()

void tbb::internal::generic_scheduler::unlock_task_pool ( arena_slot *  victim_arena_slot,
task **  victim_task_pool 
) const
inline

Unlocks victim's task pool.

Restores victim_arena_slot->task_pool munged by lock_task_pool.

Definition at line 586 of file scheduler.cpp.

587  {
588  __TBB_ASSERT( victim_arena_slot, "empty victim arena slot pointer" );
589  __TBB_ASSERT( victim_arena_slot->task_pool == LockedTaskPool, "victim arena slot is not locked" );
590  ITT_NOTIFY(sync_releasing, victim_arena_slot);
591  __TBB_store_with_release( victim_arena_slot->task_pool, victim_task_pool );
592 }

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, LockedTaskPool, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ wait_until_empty()

void tbb::internal::generic_scheduler::wait_until_empty ( )

Definition at line 772 of file arena.cpp.

772  {
 773  my_dummy_task->prefix().ref_count++; // prevents exit from local_wait_for_all when local work is done enforcing the stealing
 774  while( my_arena->my_pool_state != arena::SNAPSHOT_EMPTY )
 775  local_wait_for_all(*my_dummy_task, NULL);
 776  my_dummy_task->prefix().ref_count--;
 777 }

References local_wait_for_all(), tbb::internal::scheduler_state::my_arena, my_dummy_task, tbb::internal::arena_base::my_pool_state, tbb::task::prefix(), tbb::internal::task_prefix::ref_count, and tbb::internal::arena::SNAPSHOT_EMPTY.

Here is the call graph for this function:

◆ worker_outermost_level()

bool tbb::internal::generic_scheduler::worker_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a worker thread.

Definition at line 657 of file scheduler.h.

657  {
658  return is_worker() && outermost_level();
659 }

References is_worker(), and outermost_level().

Here is the call graph for this function:

Friends And Related Function Documentation

◆ custom_scheduler

template<typename SchedulerTraits >
friend class custom_scheduler
friend

Definition at line 389 of file scheduler.h.

Member Data Documentation

◆ min_task_pool_size

const size_t tbb::internal::generic_scheduler::min_task_pool_size = 64
static

Initial size of the task deque sufficient to serve without reallocation 4 nested parallel_for calls with iteration space of 65535 grains each.

Definition at line 369 of file scheduler.h.

Referenced by local_spawn(), and prepare_task_pool().
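
A plausible reading of the 64-element figure (an interpretation, not a statement from the TBB sources): a binary-splitting parallel_for over 65535 grains keeps at most about log2(65536) = 16 of its subtasks in the owner's deque at any time, so four nested loops need roughly 4 * 16 = 64 slots.

// Back-of-the-envelope check of the interpretation above (assumption only).
static_assert( 4 * 16 == 64, "four nested loops x ~log2(65536) live tasks each" );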

◆ my_auto_initialized

bool tbb::internal::generic_scheduler::my_auto_initialized

True if *this was created by automatic TBB initialization.

Definition at line 197 of file scheduler.h.

◆ my_dummy_task

task* tbb::internal::generic_scheduler::my_dummy_task

Fake root task created by slave threads.

The task is used as the "parent" argument to method wait_for_all.

Definition at line 186 of file scheduler.h.

Referenced by attach_arena(), cleanup_master(), cleanup_scheduler(), generic_scheduler(), tbb::internal::nested_arena_context::mimic_outermost_level(), wait_until_empty(), and tbb::internal::nested_arena_context::~nested_arena_context().

◆ my_free_list

task* tbb::internal::generic_scheduler::my_free_list

Free list of small tasks that can be reused.

Definition at line 178 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and free_task().

◆ my_market

market* tbb::internal::generic_scheduler::my_market

The market I am in.

Definition at line 172 of file scheduler.h.

Referenced by attach_arena(), cleanup_master(), cleanup_scheduler(), and init_stack_info().

◆ my_random

FastRandom tbb::internal::generic_scheduler::my_random

Random number generator used for picking a random victim from which to steal.

Definition at line 175 of file scheduler.h.

Referenced by steal_task(), and tbb::internal::custom_scheduler< SchedulerTraits >::tally_completion_of_predecessor().

◆ my_ref_count

long tbb::internal::generic_scheduler::my_ref_count

Reference count for scheduler.

Number of task_scheduler_init objects that point to this scheduler

Definition at line 190 of file scheduler.h.

◆ my_return_list

task* tbb::internal::generic_scheduler::my_return_list

List of small tasks that have been returned to this scheduler by other schedulers.

Definition at line 465 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and generic_scheduler().

◆ my_small_task_count

__TBB_atomic intptr_t tbb::internal::generic_scheduler::my_small_task_count

Number of small tasks that have been allocated by this scheduler.

Definition at line 461 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and destroy().

◆ my_stealing_threshold

uintptr_t tbb::internal::generic_scheduler::my_stealing_threshold

Position in the call stack specifying its maximal filling when stealing is still allowed.

Definition at line 155 of file scheduler.h.

Referenced by can_steal(), and init_stack_info().
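
A minimal sketch of how such a threshold is typically checked (an assumption; the real test lives in can_steal() and the threshold is computed by init_stack_info()): take the address of a local variable as the current stack position and allow stealing only while it has not sunk below the threshold, since stacks grow downward on common ABIs.

#include <cstdint>

// Hedged sketch, not the TBB can_steal() body.
bool can_steal_sketch( std::uintptr_t stealing_threshold ) {
    int anchor;  // its address approximates the current stack depth
    return reinterpret_cast<std::uintptr_t>( &anchor ) > stealing_threshold;
}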

◆ null_arena_index

const size_t tbb::internal::generic_scheduler::null_arena_index = ~size_t(0)
static

Definition at line 161 of file scheduler.h.

◆ quick_task_size

const size_t tbb::internal::generic_scheduler::quick_task_size = 256-task_prefix_reservation_size
static

If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd.

Definition at line 144 of file scheduler.h.

Referenced by allocate_task().
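
To make the role of the threshold concrete, here is a small standalone sketch of the allocation policy it expresses (an assumption for illustration; allocate_task() is the real decision point and its details differ).

#include <cstddef>

enum class task_source { free_list, heap };

// Tasks no larger than the threshold are recycled through the per-scheduler
// free list; larger ones go to the general allocator.
task_source choose_allocation_path( std::size_t task_bytes, std::size_t threshold ) {
    return task_bytes <= threshold ? task_source::free_list : task_source::heap;
}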


The documentation for this class was generated from the following files:
scheduler.h
scheduler.cpp
arena.cpp
tbb::task::prefix
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:991
__TBB_cl_prefetch
#define __TBB_cl_prefetch(p)
Definition: mic_common.h:33
tbb::task::set_ref_count
void set_ref_count(int count)
Set reference count.
Definition: task.h:750
tbb::internal::mail_inbox::pop
task_proxy * pop(__TBB_ISOLATION_EXPR(isolation_tag isolation))
Get next piece of mail, or NULL if mailbox is empty.
Definition: mailbox.h:202
tbb::internal::task_proxy::pool_bit
static const intptr_t pool_bit
Definition: mailbox.h:30
tbb::internal::task_proxy::is_shared
static bool is_shared(intptr_t tat)
True if the proxy is stored both in its sender's pool and in the destination mailbox.
Definition: mailbox.h:46
tbb::internal::MByte
const size_t MByte
Definition: tbb_misc.h:45
tbb::internal::generic_scheduler::local_spawn
void local_spawn(task *first, task *&next)
Definition: scheduler.cpp:651
end
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p void ITT_FORMAT p no args __itt_suppress_mode_t unsigned int void size_t ITT_FORMAT d void ITT_FORMAT p void ITT_FORMAT p __itt_model_site __itt_model_site_instance ITT_FORMAT p __itt_model_task __itt_model_task_instance ITT_FORMAT p void ITT_FORMAT p void ITT_FORMAT p void size_t ITT_FORMAT d void ITT_FORMAT p const wchar_t ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s no args void ITT_FORMAT p size_t ITT_FORMAT d no args const wchar_t const wchar_t ITT_FORMAT s __itt_heap_function void size_t int ITT_FORMAT d __itt_heap_function void ITT_FORMAT p __itt_heap_function void void size_t int ITT_FORMAT d no args no args unsigned int ITT_FORMAT u const __itt_domain __itt_id ITT_FORMAT lu const __itt_domain __itt_id __itt_id __itt_string_handle ITT_FORMAT p const __itt_domain __itt_id ITT_FORMAT p const __itt_domain __itt_id __itt_timestamp __itt_timestamp end
Definition: ittnotify_static.h:182
__TBB_ISOLATION_EXPR
#define __TBB_ISOLATION_EXPR(isolation)
Definition: scheduler_common.h:67
__TBB_ASSERT
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
id
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p void ITT_FORMAT p no args __itt_suppress_mode_t unsigned int void size_t ITT_FORMAT d void ITT_FORMAT p void ITT_FORMAT p __itt_model_site __itt_model_site_instance ITT_FORMAT p __itt_model_task __itt_model_task_instance ITT_FORMAT p void ITT_FORMAT p void ITT_FORMAT p void size_t ITT_FORMAT d void ITT_FORMAT p const wchar_t ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s no args void ITT_FORMAT p size_t ITT_FORMAT d no args const wchar_t const wchar_t ITT_FORMAT s __itt_heap_function void size_t int ITT_FORMAT d __itt_heap_function void ITT_FORMAT p __itt_heap_function void void size_t int ITT_FORMAT d no args no args unsigned int ITT_FORMAT u const __itt_domain __itt_id id
Definition: ittnotify_static.h:172
tbb::internal::generic_scheduler::my_return_list
task * my_return_list
List of small tasks that have been returned to this scheduler by other schedulers.
Definition: scheduler.h:465
LockedTaskPool
#define LockedTaskPool
Definition: scheduler.h:47
tbb::internal::arena_slot_line2::task_pool_ptr
task **__TBB_atomic task_pool_ptr
Task pool of the scheduler that owns this slot.
Definition: scheduler_common.h:369
ITT_NOTIFY
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:112
tbb::internal::arena::wakeup
Definition: arena.h:286
tbb::internal::scheduler_state::my_affinity_id
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:99
__TBB_cl_evict
#define __TBB_cl_evict(p)
Definition: mic_common.h:34
tbb::internal::market::global_market
static market & global_market(bool is_public, unsigned max_num_workers=0, size_t stack_size=0)
Factory method creating new market object.
Definition: market.cpp:96
tbb::internal::generic_scheduler::my_dummy_task
task * my_dummy_task
Fake root task created by slave threads.
Definition: scheduler.h:186
tbb::internal::poison_pointer
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
tbb::internal::generic_scheduler::reset_task_pool_and_leave
void reset_task_pool_and_leave()
Resets head and tail indices to 0, and leaves task pool.
Definition: scheduler.h:702
tbb::internal::arena::ref_external
static const unsigned ref_external
Reference increment values for externals and workers.
Definition: arena.h:327
tbb::internal::arena_base::my_exit_monitors
concurrent_monitor my_exit_monitors
Waiting object for master threads that cannot join the arena.
Definition: arena.h:263
tbb::internal::generic_scheduler::commit_spawned_tasks
void commit_spawned_tasks(size_t new_tail)
Makes newly spawned tasks visible to thieves.
Definition: scheduler.h:710
tbb::internal::generic_scheduler::is_task_pool_published
bool is_task_pool_published() const
Definition: scheduler.h:628
tbb::internal::generic_scheduler::local_spawn_root_and_wait
void local_spawn_root_and_wait(task *first, task *&next)
Definition: scheduler.cpp:718
tbb::internal::generic_scheduler::release_task_pool
void release_task_pool() const
Unlocks the local task pool.
Definition: scheduler.cpp:522
tbb::internal::es_task_proxy
Tag for v3 task_proxy.
Definition: scheduler_common.h:174
tbb::internal::arena_slot::allocate_task_pool
void allocate_task_pool(size_t n)
Definition: scheduler_common.h:387
tbb::internal::es_task_is_stolen
Set if the task has been stolen.
Definition: scheduler_common.h:178
tbb::internal::generic_scheduler::local_wait_for_all
virtual void local_wait_for_all(task &parent, task *child)=0
tbb::task_group_context::isolated
Definition: task.h:367
tbb::internal::scheduler_state::my_arena
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
tbb::internal::mail_inbox::is_idle_state
bool is_idle_state(bool value) const
Indicate whether thread that reads this mailbox is idle.
Definition: mailbox.h:218
tbb::internal::arena_slot_line1::task_pool
task **__TBB_atomic task_pool
Definition: scheduler_common.h:339
tbb::internal::generic_scheduler::prepare_task_pool
size_t prepare_task_pool(size_t n)
Makes sure that the task pool can accommodate at least n more elements.
Definition: scheduler.cpp:439
tbb::task_group_context::default_traits
Definition: task.h:380
tbb::internal::generic_scheduler::plugged_return_list
static task * plugged_return_list()
Special value used to mark my_return_list as not taking any more entries.
Definition: scheduler.h:458
tbb::internal::allocate_scheduler
generic_scheduler * allocate_scheduler(market &m, bool genuine)
Definition: scheduler.cpp:37
tbb::internal::generic_scheduler::generic_scheduler
generic_scheduler(market &, bool)
Definition: scheduler.cpp:84
__TBB_store_release
#define __TBB_store_release
Definition: tbb_machine.h:860
tbb::internal::arena::advertise_new_work
void advertise_new_work()
If necessary, raise a flag that there is new job in arena.
Definition: arena.h:484
tbb::task::executing
task is running, and will be destroyed after method execute() completes.
Definition: task.h:626
tbb::internal::scheduler_state::my_arena_index
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:79
new_size
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p void ITT_FORMAT p no args __itt_suppress_mode_t unsigned int void size_t ITT_FORMAT d void ITT_FORMAT p void ITT_FORMAT p __itt_model_site __itt_model_site_instance ITT_FORMAT p __itt_model_task __itt_model_task_instance ITT_FORMAT p void ITT_FORMAT p void ITT_FORMAT p void size_t ITT_FORMAT d void ITT_FORMAT p const wchar_t ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s no args void ITT_FORMAT p size_t ITT_FORMAT d no args const wchar_t const wchar_t ITT_FORMAT s __itt_heap_function void size_t int ITT_FORMAT d __itt_heap_function void ITT_FORMAT p __itt_heap_function void void size_t new_size
Definition: ittnotify_static.h:163
tbb::internal::generic_scheduler::assert_task_pool_valid
void assert_task_pool_valid() const
Definition: scheduler.h:398
tbb::internal::arena_slot_line1::my_scheduler
generic_scheduler * my_scheduler
Scheduler of the thread attached to the slot.
Definition: scheduler_common.h:333
tbb::internal::market::worker_stack_size
size_t worker_stack_size() const
Returns the requested stack size of worker threads.
Definition: market.h:314
tbb::internal::generic_scheduler::cleanup_scheduler
void cleanup_scheduler()
Cleans up this scheduler (the scheduler might be destroyed).
Definition: scheduler.cpp:294
tbb::internal::generic_scheduler::leave_task_pool
void leave_task_pool()
Leave the task pool.
Definition: scheduler.cpp:1258
tbb::internal::generic_scheduler::my_small_task_count
__TBB_atomic intptr_t my_small_task_count
Number of small tasks that have been allocated by this scheduler.
Definition: scheduler.h:461
EmptyTaskPool
#define EmptyTaskPool
Definition: scheduler.h:46
tbb::internal::generic_scheduler::get_task
task * get_task(__TBB_ISOLATION_EXPR(isolation_tag isolation))
Get a task from the local pool.
Definition: scheduler.cpp:1010
tbb::internal::task_prefix::ref_count
__TBB_atomic reference_count ref_count
Reference count used for synchronization.
Definition: task.h:263
tbb::internal::task_proxy::mailbox_bit
static const intptr_t mailbox_bit
Definition: mailbox.h:31
tbb::internal::small_local_task
Bitwise-OR of local_task and small_task.
Definition: scheduler_common.h:196
tbb::internal::mail_inbox::attach
void attach(mail_outbox &putter)
Attach inbox to a corresponding outbox.
Definition: mailbox.h:193
sync_cancel
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p sync_cancel
Definition: ittnotify_static.h:102
tbb::internal::generic_scheduler::steal_task_from
task * steal_task_from(__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
Steal task from another scheduler's ready pool.
Definition: scheduler.cpp:1142
tbb::internal::__TBB_store_relaxed
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
tbb::internal::arena::SNAPSHOT_EMPTY
static const pool_state_t SNAPSHOT_EMPTY
No tasks to steal since last snapshot was taken.
Definition: arena.h:318
tbb::internal::es_ref_count_active
Set if ref_count might be changed by another thread. Used for debugging.
Definition: scheduler_common.h:176
__TBB_FetchAndDecrementWrelease
#define __TBB_FetchAndDecrementWrelease(P)
Definition: tbb_machine.h:314
tbb::internal::task_proxy::location_mask
static const intptr_t location_mask
Definition: mailbox.h:32
tbb::internal::generic_scheduler::destroy
void destroy()
Destroy and deallocate this scheduler object.
Definition: scheduler.cpp:285
GATHER_STATISTIC
#define GATHER_STATISTIC(x)
Definition: tbb_statistics.h:232
ITT_SYNC_CREATE
#define ITT_SYNC_CREATE(obj, type, name)
Definition: itt_notify.h:115
tbb::internal::affinity_id
unsigned short affinity_id
An id as used for specifying affinity.
Definition: task.h:128
tbb::internal::generic_scheduler::prepare_for_spawning
task * prepare_for_spawning(task *t)
Checks if t is affinitized to another thread, and if so, bundles it as proxy.
Definition: scheduler.cpp:595
tbb::internal::arena::mailbox
mail_outbox & mailbox(affinity_id id)
Get reference to mailbox corresponding to given affinity_id.
Definition: arena.h:305
tbb::internal::governor::sign_off
static void sign_off(generic_scheduler *s)
Unregister TBB scheduler instance from thread-local storage.
Definition: governor.cpp:145
tbb::internal::__TBB_store_with_release
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
tbb::internal::arena::my_slots
arena_slot my_slots[1]
Definition: arena.h:390
tbb::internal::generic_scheduler::attach_arena
void attach_arena(arena *, size_t index, bool is_master)
Definition: arena.cpp:80
tbb::internal::generic_scheduler::attach_mailbox
void attach_mailbox(affinity_id id)
Definition: scheduler.h:667
tbb::internal::no_isolation
const isolation_tag no_isolation
Definition: task.h:133
__TBB_ISOLATION_ARG
#define __TBB_ISOLATION_ARG(arg1, isolation)
Definition: scheduler_common.h:68
lock
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p void ITT_FORMAT p no args __itt_suppress_mode_t unsigned int void size_t ITT_FORMAT d void ITT_FORMAT p void ITT_FORMAT p __itt_model_site __itt_model_site_instance ITT_FORMAT p __itt_model_task __itt_model_task_instance ITT_FORMAT p void * lock
Definition: ittnotify_static.h:121
__TBB_Yield
#define __TBB_Yield()
Definition: ibm_aix51.h:44
tbb::internal::generic_scheduler::is_worker
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:673
tbb::task::freed
task object is on free list, or is going to be put there, or was just taken off.
Definition: task.h:634
tbb::internal::arena_slot_line2::my_task_pool_size
size_t my_task_pool_size
Capacity of the primary task pool (number of elements - pointers to task).
Definition: scheduler_common.h:366
tbb::internal::arena_base::my_market
market * my_market
The market that owns this arena.
Definition: arena.h:232
tbb::internal::arena_base::my_num_reserved_slots
unsigned my_num_reserved_slots
The number of reserved slots (can be occupied only by masters).
Definition: arena.h:253
tbb::internal::arena_slot::fill_with_canary_pattern
void fill_with_canary_pattern(size_t, size_t)
Definition: scheduler_common.h:384
tbb::internal::mail_inbox::set_is_idle
void set_is_idle(bool value)
Indicate whether thread that reads this mailbox is idle.
Definition: mailbox.h:211
tbb::internal::governor::sign_on
static void sign_on(generic_scheduler *s)
Register TBB scheduler instance in thread-local storage.
Definition: governor.cpp:124
tbb::internal::scheduler_properties::worker
static const bool worker
Definition: scheduler.h:51
tbb::internal::generic_scheduler::publish_task_pool
void publish_task_pool()
Used by workers to enter the task pool.
Definition: scheduler.cpp:1246
__TBB_control_consistency_helper
#define __TBB_control_consistency_helper()
Definition: gcc_generic.h:60
tbb::internal::generic_scheduler::is_proxy
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:348
tbb::internal::generic_scheduler::lock_task_pool
task ** lock_task_pool(arena_slot *victim_arena_slot) const
Locks victim's task pool, and returns pointer to it. The pointer can be NULL.
Definition: scheduler.cpp:537
tbb::internal::generic_scheduler::unlock_task_pool
void unlock_task_pool(arena_slot *victim_arena_slot, task **victim_task_pool) const
Unlocks victim's task pool.
Definition: scheduler.cpp:586
parent
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p void ITT_FORMAT p no args __itt_suppress_mode_t unsigned int void size_t ITT_FORMAT d void ITT_FORMAT p void ITT_FORMAT p __itt_model_site __itt_model_site_instance ITT_FORMAT p __itt_model_task __itt_model_task_instance ITT_FORMAT p void ITT_FORMAT p void ITT_FORMAT p void size_t ITT_FORMAT d void ITT_FORMAT p const wchar_t ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s const char ITT_FORMAT s no args void ITT_FORMAT p size_t ITT_FORMAT d no args const wchar_t const wchar_t ITT_FORMAT s __itt_heap_function void size_t int ITT_FORMAT d __itt_heap_function void ITT_FORMAT p __itt_heap_function void void size_t int ITT_FORMAT d no args no args unsigned int ITT_FORMAT u const __itt_domain __itt_id ITT_FORMAT lu const __itt_domain __itt_id __itt_id parent
Definition: ittnotify_static.h:176
tbb::internal::arena_slot_line2::tail
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
Definition: scheduler_common.h:363
tbb::internal::isolation_tag
intptr_t isolation_tag
A tag for task isolation.
Definition: task.h:132
tbb::internal::first
auto first(Container &c) -> decltype(begin(c))
Definition: _range_iterator.h:34
tbb::internal::scheduler_state::my_innermost_running_task
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:88
tbb::internal::scheduler_state::my_arena_slot
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:82
tbb::internal::arena_base::my_limit
atomic< unsigned > my_limit
The maximal number of currently busy slots.
Definition: arena.h:161
__TBB_get_bsp
void * __TBB_get_bsp()
Retrieves the current RSE backing store pointer. IA64 specific.
tbb::internal::generic_scheduler::is_quiescent_local_task_pool_empty
bool is_quiescent_local_task_pool_empty() const
Definition: scheduler.h:639
tbb::internal::no_cache
Disable caching for a small task.
Definition: scheduler_common.h:198
s
void const char const char int ITT_FORMAT __itt_group_sync s
Definition: ittnotify_static.h:91
tbb::internal::generic_scheduler::is_version_3_task
static bool is_version_3_task(task &t)
Definition: scheduler.h:146
tbb::internal::scheduler_properties::outermost
bool outermost
Indicates that a scheduler is on outermost level.
Definition: scheduler.h:57
tbb::task::allocated
task object is freshly allocated or recycled.
Definition: task.h:632
tbb::internal::governor::assume_scheduler
static void assume_scheduler(generic_scheduler *s)
Temporarily set TLS slot to the given scheduler.
Definition: governor.cpp:116
tbb::internal::suppress_unused_warning
void suppress_unused_warning(const T1 &)
Utility template function to prevent "unused" warnings by various compilers.
Definition: tbb_stddef.h:398
tbb::internal::governor::local_scheduler
static generic_scheduler * local_scheduler()
Obtain the thread-local instance of the TBB scheduler.
Definition: governor.h:129
tbb::internal::task_prefix::owner
scheduler * owner
Obsolete. The scheduler that owns the task.
Definition: task.h:236
task
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p void ITT_FORMAT p no args __itt_suppress_mode_t unsigned int void size_t ITT_FORMAT d void ITT_FORMAT p void ITT_FORMAT p __itt_model_site __itt_model_site_instance ITT_FORMAT p __itt_model_task * task
Definition: ittnotify_static.h:119
tbb::internal::governor::is_set
static bool is_set(generic_scheduler *s)
Used to check validity of the local scheduler TLS contents.
Definition: governor.cpp:120
tbb::internal::task_prefix::next
tbb::task * next
"next" field for list of task
Definition: task.h:286
tbb::internal::task_prefix::extra_state
unsigned char extra_state
Miscellaneous state that is not directly visible to users, stored as a byte for compactness.
Definition: task.h:281
tbb::internal::arena_slot_line1::head
__TBB_atomic size_t head
Index of the first ready task in the deque.
Definition: scheduler_common.h:343
tbb::internal::scheduler_state::my_inbox
mail_inbox my_inbox
Definition: scheduler.h:90
poison_value
#define poison_value(g)
Definition: scheduler_common.h:235
tbb::internal::scheduler_state::my_properties
scheduler_properties my_properties
Definition: scheduler.h:101
tbb::task::ready
task is in ready pool, or is going to be put there, or was just taken off.
Definition: task.h:630
tbb::internal::FastRandom::get
unsigned short get()
Get a random number.
Definition: tbb_misc.h:146
tbb::internal::generic_scheduler::free_nonlocal_small_task
void free_nonlocal_small_task(task &t)
Free a small task t that that was allocated by a different scheduler.
Definition: scheduler.cpp:412
tbb::internal::local_task
Task is known to have been allocated by this scheduler.
Definition: scheduler_common.h:190
tbb::internal::generic_scheduler::min_task_pool_size
static const size_t min_task_pool_size
Definition: scheduler.h:369
tbb::internal::as_atomic
atomic< T > & as_atomic(T &t)
Definition: atomic.h:572
tbb::internal::generic_scheduler::is_local_task_pool_quiescent
bool is_local_task_pool_quiescent() const
Definition: scheduler.h:633
tbb::internal::generic_scheduler::deallocate_task
void deallocate_task(task &t)
Return task object to the memory allocator.
Definition: scheduler.h:683
tbb::internal::task_prefix::isolation
isolation_tag isolation
The tag used for task isolation.
Definition: task.h:209
tbb::internal::reference_count
intptr_t reference_count
A reference count.
Definition: task.h:120
tbb::internal::free_task_hint
free_task_hint
Optimization hint to free_task that enables it omit unnecessary tests and code.
Definition: scheduler_common.h:186
tbb::internal::assert_task_valid
void assert_task_valid(const task *)
Definition: scheduler_common.h:237
tbb::internal::task_prefix::origin
scheduler * origin
The scheduler that allocated the task, or NULL if the task is big.
Definition: task.h:228
tbb::internal::generic_scheduler::allocate_task
task & allocate_task(size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
Allocate task object, either from the heap or a free list.
Definition: scheduler.cpp:337
sync_releasing
void const char const char int ITT_FORMAT __itt_group_sync x void const char ITT_FORMAT __itt_group_sync s void ITT_FORMAT __itt_group_sync p void ITT_FORMAT p sync_releasing
Definition: ittnotify_static.h:104
tbb::internal::concurrent_monitor::notify_one
void notify_one()
Notify one thread about the event.
Definition: concurrent_monitor.h:157
tbb::internal::generic_scheduler::my_market
market * my_market
The market I am in.
Definition: scheduler.h:172
tbb::atomic_fence
void atomic_fence()
Sequentially consistent full memory fence.
Definition: tbb_machine.h:342
tbb::internal::arena_base::my_pool_state
tbb::atomic< uintptr_t > my_pool_state
Current task pool state and estimate of available tasks amount.
Definition: arena.h:195
p
void const char const char int ITT_FORMAT __itt_group_sync p
Definition: ittnotify_static.h:91
tbb::internal::generic_scheduler::is_quiescent_local_task_pool_reset
bool is_quiescent_local_task_pool_reset() const
Definition: scheduler.h:644
tbb::internal::generic_scheduler::my_ref_count
long my_ref_count
Reference count for scheduler.
Definition: scheduler.h:190
tbb::internal::NFS_Free
void __TBB_EXPORTED_FUNC NFS_Free(void *)
Free memory allocated by NFS_Allocate.
Definition: cache_aligned_allocator.cpp:198
tbb::internal::generic_scheduler::commit_relocated_tasks
void commit_relocated_tasks(size_t new_tail)
Makes relocated tasks visible to thieves and releases the local task pool.
Definition: scheduler.h:719
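
The ordering requirement behind commit_relocated_tasks is that the new tail must be published with release semantics, so that a thief which observes the updated tail also observes the task pointers stored before it. A conceptual C++11 sketch (names are hypothetical; the real member function additionally releases the task-pool lock):

#include <atomic>
#include <cstddef>

std::atomic<std::size_t> pool_tail{0};

void commit_sketch(std::size_t new_tail) {
    // Release store: everything written into the task pool before this line
    // becomes visible to a thief that subsequently reads pool_tail.
    pool_tail.store(new_tail, std::memory_order_release);
}
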
tbb::internal::task_prefix::context
task_group_context * context
Shared context that is used to communicate asynchronous state changes.
Definition: task.h:219
tbb::internal::market::adjust_demand
void adjust_demand(arena &, int delta)
Requests that the arena's demand for workers be adjusted.
Definition: market.cpp:557
tbb::internal::generic_scheduler::my_random
FastRandom my_random
Random number generator used for picking a random victim from which to steal.
Definition: scheduler.h:175
tbb::internal::scheduler_properties::master
static const bool master
Definition: scheduler.h:52
tbb::internal::arena_base::my_num_slots
unsigned my_num_slots
The number of slots in the arena.
Definition: arena.h:250
tbb::internal::generic_scheduler::acquire_task_pool
void acquire_task_pool() const
Locks the local task pool.
Definition: scheduler.cpp:493
tbb::internal::arena::work_spawned
Definition: arena.h:285
tbb::internal::task_prefix_reservation_size
const size_t task_prefix_reservation_size
Number of bytes reserved for a task prefix.
Definition: scheduler_common.h:159
tbb::internal::generic_scheduler::my_stealing_threshold
uintptr_t my_stealing_threshold
Position in the call stack marking the maximum stack usage at which stealing is still allowed.
Definition: scheduler.h:155
tbb::internal::NFS_Allocate
void *__TBB_EXPORTED_FUNC NFS_Allocate(size_t n_element, size_t element_size, void *hint)
Allocate memory on a cache/sector line boundary.
Definition: cache_aligned_allocator.cpp:176
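
NFS_Allocate and NFS_Free are the internal entry points behind the public tbb::cache_aligned_allocator; code that needs the same cache-line-aligned memory would normally go through that public wrapper, as in this small example:

#include "tbb/cache_aligned_allocator.h"

int main() {
    tbb::cache_aligned_allocator<double> alloc;
    double* buf = alloc.allocate(1024);   // storage starts on a cache line boundary
    buf[0] = 42.0;                        // ... use the buffer ...
    alloc.deallocate(buf, 1024);
    return 0;
}
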
tbb::internal::generic_scheduler::quick_task_size
static const size_t quick_task_size
If sizeof(task) is <= quick_task_size, the task is managed via a free list instead of being malloc'd.
Definition: scheduler.h:144
tbb::internal::small_task
Task is known to be a small task.
Definition: scheduler_common.h:193
h
ITT instrumentation parameter; its declaration is ITT_FORMAT macro-expansion residue and is omitted here.
Definition: ittnotify_static.h:159
tbb::internal::__TBB_load_relaxed
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
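
__TBB_load_relaxed is essentially the library's portable spelling of a relaxed atomic load, used for hot shared indices such as head and tail. The C++11 analogue below shows the semantics; the variable name is hypothetical:

#include <atomic>
#include <cstddef>

std::atomic<std::size_t> tail{0};

std::size_t peek_tail() {
    // Atomic read with no ordering guarantees -- a relaxed load of a shared
    // index, the counterpart of __TBB_load_relaxed.
    return tail.load(std::memory_order_relaxed);
}
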
tbb::internal::generic_scheduler::outermost_level
bool outermost_level() const
True if the scheduler is on the outermost dispatch level.
Definition: scheduler.h:649
tbb::internal::generic_scheduler::my_free_list
task * my_free_list
Free list of small tasks that can be reused.
Definition: scheduler.h:178
tbb::internal::scheduler_properties::type
bool type
Indicates that a scheduler acts as a master or a worker.
Definition: scheduler.h:54
__TBB_CONTEXT_ARG
#define __TBB_CONTEXT_ARG(arg1, context)
Definition: scheduler_common.h:60

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.