Document 6537043

SAMPLE QUESTIONS IN ALGORITHM ANALYSIS
Understanding analysis of algorithms
Let's start with a simple algorithm (the book works through a different simple algorithm, finding the maximum).
Algorithm innerProduct
Input: Non-negative integer n and two integer arrays A and B of size n.
Output: The inner product of the two arrays.
prod = 0
for i = 0 to n-1 do
prod = prod + A[i]*B[i]
return prod
• Line 1 is one op (assigning a value).
• Loop initialization is one op (assigning a value).
• Line 3 is five ops per iteration (mult, add, 2 array refs, assign).
• Line 3 is executed n times; total is 5n.
• Loop incrementation is two ops (an addition and an assignment).
• Loop incrementation is done n times; total is 2n.
• Loop termination test is one op (a comparison i<n).
• Loop termination is done n+1 times (n successes, one failure); total is n+1.
• Return is one op.
The total is thus 1+1+5n+2n+(n+1)+1 = 8n+4.
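The operation counts can be checked against a direct Java translation of the pseudocode (class and method names here are ours, not the book's):

```java
public class InnerProduct {
    // Returns the inner product of A and B, both of length n.
    static int innerProduct(int n, int[] A, int[] B) {
        int prod = 0;                    // line 1: one assignment
        for (int i = 0; i < n; i++) {    // one init, n+1 tests, n increments
            prod = prod + A[i] * B[i];   // five ops per iteration
        }
        return prod;                     // one op
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {4, 5, 6};
        System.out.println(innerProduct(3, a, b)); // 1*4 + 2*5 + 3*6 = 32
    }
}
```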
findFactorial(n) {
    int factorial = 1;   // set initial value of factorial to 1
    int iterator = 1;    // set initial value of loop iterator to 1
    while (iterator <= n) {
        factorial = factorial * iterator;
        iterator = iterator + 1;
    } // end of while ()
    System.out.println("The factorial is " + factorial);
}
• We perform two variable initializations and two assignments before the while loop
• We check the loop condition n+1 times
• We go into the while loop n times
• We perform two assignments and two arithmetic operations each time
• We perform one print statement
• The running time is T(n) = 4 + (n+1) + n*(4) + 1 = 5n + 6
Imagine that for a problem we have a choice of using program 1, which has a running time of
T1(n) = 40*n + 10
and program 2, which has a running time of
T2(n) = 3*n^2
Let's examine what this means for different values of n.
T1(n) = 40*n + 10
T2(n) = 3*n^2
If program 1 and 2 are two different methods for finding a patient ID within the database
of a small practice with 12 patients (i.e., n = 12) which program would you choose?
Would your choice be different if you knew that the practice would expand to include up to
100 patients?
• Program 2 has a running time that increases fairly quickly as n gets larger than 12
• Program 1 has a running time that grows much more slowly as n increases
• Even if the speed of the computer hardware on which we are running both programs doubles, T1(n) remains a better choice than T2(n) for large n
• For large collections of data, such as can be found in electronic medical records, improving hardware speeds is no substitute for improving the efficiency of algorithms that may need to manipulate the data in such collections
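To make the comparison concrete, here is a small sketch (names ours) that evaluates both cost functions at n = 12 and n = 100:

```java
public class CompareCosts {
    static long t1(long n) { return 40 * n + 10; } // program 1
    static long t2(long n) { return 3 * n * n; }   // program 2

    public static void main(String[] args) {
        for (long n : new long[]{12, 100}) {
            System.out.println("n=" + n + "  T1=" + t1(n) + "  T2=" + t2(n));
        }
        // n=12:  T1=490,  T2=432   -> program 2 is slightly faster
        // n=100: T1=4010, T2=30000 -> program 1 wins decisively
    }
}
```

At n = 12 the quadratic program actually edges out the linear one, but by n = 100 the order has reversed and the gap only widens as n grows.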
The precise running time of a program depends on the particular computer used. Constant factors for a particular computer include:
– the average number of machine language instructions the assembler for that computer produces
– the average number of machine language instructions the computer executes in one second
• The Big-Oh notation is designed to help us focus on the non-constant portions of the running time
• Instead of saying that the factorial program studied has running time T(n) = 5n + 6, we
say it takes O(n) time (dropping the 5 and 6 from 5n + 6)
The Big-Oh notation allows us to
– ignore unknown constants associated with the computer
– make simplifying assumptions about the amount of time used up by an invocation of a simple
programming statement
If
– f(n) is a mathematical function on the non-negative integers (i.e., n = 0, 1, 2, 3, 4, 5, …), and
– T(n) is a function with a non-negative value (possibly corresponding to the running time of some program),
we say that T(n) is O(f(n)) if T(n) is at most a constant times f(n) for all values of n greater than some baseline n0.
Formally: T(n) is O(f(n)) if there exists a non-negative integer n0 and a constant c > 0 such that for all integers n >= n0, T(n) <= c*f(n).
For program 1 in our previous example, T(0) = 10, T(1) = 50, and T(n) = 40n + 10 generally. We can say that T(n) is O(n) because for n0 = 10 and c = 41, every n >= n0 satisfies
40n + 10 <= 41n (this is because for n >= 10, 40n + 10 <= 40n + n)
For program 2 in our previous example, T(0) = 0, T(1) = 3, and T(n) = 3n^2 generally. We can say that T(n) is O(n^2) because for n0 = 0 and c = 3, every n >= n0 satisfies 3n^2 <= 3n^2.
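The claimed witness pairs (n0, c) can be checked numerically over a finite range. This sketch (names ours) is only a spot check, not a proof:

```java
public class BigOhWitness {
    // Does 40n + 10 <= 41n hold? (program 1, witness c = 41)
    static boolean holds1(long n) { return 40 * n + 10 <= 41 * n; }

    // Does 3n^2 <= 3n^2 hold? (program 2, witness c = 3; equality always holds)
    static boolean holds2(long n) { return 3 * n * n <= 3 * n * n; }

    public static void main(String[] args) {
        for (long n = 10; n <= 1000000; n++) {  // n0 = 10 for program 1
            if (!holds1(n)) throw new AssertionError("c=41 fails at n=" + n);
        }
        for (long n = 0; n <= 1000000; n++) {   // n0 = 0 for program 2
            if (!holds2(n)) throw new AssertionError("c=3 fails at n=" + n);
        }
        System.out.println("Both witness pairs (n0, c) hold on the tested range.");
    }
}
```

Note that holds1 fails below the baseline (e.g., at n = 9, 370 > 369), which is exactly why the definition only requires the inequality from n0 onward.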
Below is the core of the Java program Quadratic.java; our goal is to estimate the number of instructions that the program executes for a given input size N. There are two main parts of the program: reading in the input and finding the pair whose sum is closest to zero. We focus on the latter part because it dominates the running time. (See Exercise XYZ for an analysis of the first part.)
long best = Long.MAX_VALUE;
for (int i = 0; i < N; i++) {
    for (int j = i+1; j < N; j++) {
        long sum = a[i] + a[j];
        if (Math.abs(sum) < Math.abs(best))
            best = sum;
    }
}
For simplicity, we will assume that each operation (variable declaration, assignment, increment, sum,
absolute value, comparison) [array access??] takes one step.
• The first statement consists of 1 variable declaration and 1 assignment statement.
• The for loop over i: The initialization consists of 1 variable declaration (i) and 1 assignment statement (i = 0); the loop continuation condition requires N+1 comparisons (N of which evaluate to true, and one to false); the increment part occurs N times.
• The for loop over j: The j loop itself is executed N times, once for each value of i. This means that we must do each of the following operations N times: declare j, compute i+1, and initialize j, for a total of 3N steps. Now we analyze the total number of times that the increment statement (j++) is executed. When i = 0 the j loop iterates N-1 times; when i = 1 the j loop iterates N-2 times; and so forth. Overall, this is N-1 + N-2 + ... + 1 = N(N-1)/2 times. This sum arises frequently in computer science because it is the number of distinct pairs of N elements. The loop continuation condition is executed once more per loop than the increment statement, so there are a total of N(N-1)/2 + N comparisons.
• The body of the j loop: The body is executed once for each distinct pair of N elements. As we've seen, this is N(N-1)/2 times. The body consists of one variable declaration, one addition, one comparison, two absolute values, and either one or two assignment statements (depending on the result of the comparison), for a total of between 6N(N-1)/2 and 7N(N-1)/2 steps.
Summing up all steps leads to: 2 + (2 + N+1 + N) + (3N + N(N-1)/2 + N + N(N-1)/2) + (7N(N-1)/2) = 5 + 1.5N + 4.5N^2.
Order of growth.
Computer scientists use order of growth notation to simplify the expressions that arise in the analysis of algorithms. Informally, the order of growth is the term that grows the fastest as N increases, ignoring the leading coefficient. For example, we determined that the double loop of Quadratic.java takes 5 + 1.5N + 4.5N^2 steps. The order of growth of this program is Θ(N^2). Disregarding lower order terms is justified since we are primarily interested in running times for large values of N, in which case the effect of the leading term overwhelms the smaller terms. We can partially justify disregarding the leading coefficient because it is measured in the number of steps. But we are really interested in the running time (in seconds). We really should be weighting each step by the actual time it takes to execute that type of instruction on a particular machine and with a particular compiler. Formally, this notation means that there exist constants 0 < a ≤ b such that the running time is between aN^2 and bN^2 for all positive integers N. We can choose a = 4.5 and b = 11 in the example above.
We use order of growth notation since it is a simple but powerful model of running time. For example, if an algorithm has Θ(N^2) running time, then we expect the running time to quadruple if we double the size of the problem. Order of growth is usually easier to calculate than meticulously trying to count the total number of steps. We now consider an order of growth analysis of the program Cubic.java. It takes a command line argument N, reads in N long integers, and finds the triple of values whose sum is closest to 0. Although this problem seems contrived, it is deeply related to many problems in computational geometry (see section xyz).
The main computation loop is shown below.
long best = Long.MAX_VALUE;
for (int i = 0; i < N; i++) {
    for (int j = i+1; j < N; j++) {
        for (int k = j+1; k < N; k++) {
            long sum = a[i] + a[j] + a[k];
            if (Math.abs(sum) < Math.abs(best)) best = sum;
        }
    }
}
The bottleneck is iterating over all triples of integers. There are N choose 3 = N(N-1)(N-2)/6 ways to select 3 of N integers. Thus, the order of growth is Θ(N^3), or cubic. If we double the size of the problem, we should expect the running time to go up eightfold.
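One way to see the eightfold prediction is to count the triple-loop iterations directly. This sketch (names ours) computes N choose 3 for doubled sizes and prints the ratio:

```java
public class DoublingRatio {
    // Number of iterations of the triple loop in Cubic.java: N choose 3.
    static long triples(long N) { return N * (N - 1) * (N - 2) / 6; }

    public static void main(String[] args) {
        for (long N = 125; N <= 1000; N *= 2) {
            double ratio = (double) triples(2 * N) / triples(N);
            System.out.println("N=" + N + "  ratio=" + ratio);
        }
        // The ratio tends to 8 as N grows, as Theta(N^3) predicts.
    }
}
```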
Q+A
Q. How long does retrieving the length of an array take?
A. Constant time - its length is stored as a separate variable.
Q. How long do string operations take?
A. The methods length, charAt, and substring take constant time. The methods toLowerCase and replace
take linear time. The methods compareTo, startsWith, and indexOf take time proportional to the number of
characters needed to resolve the answer (constant in the best case and linear in the worst case). String
concatenation takes time proportional to the total number of characters in the result.
Q. Why does Java need so many fields to represent a string?
A. The array value contains a reference to the sequence of characters. The string that is represented consists
of the characters value[offset] through value[offset+count-1]. The variable hash is used to cache the hash
code of the string so that it need not be computed more than once. Java implements a string this way so that
the substring method can reuse the character array without requiring extra memory (beyond the 24 bytes of
header information).
Q. Why does allocating an array of size N take time proportional to N?
A. In Java, all array elements are automatically initialized to default values (0, false, or null). In principle,
this could be a constant time operation if the system defers initialization of each element until just before
the program accesses that element for the first time.
Q. Should I perform micro-optimizations, such as loop unrolling or in-lining functions?
A. "Premature optimization is the root of all evil" - Donald Knuth. Micro-optimizations are very rarely
useful, especially when they come at the expense of code readability. Modern compilers are typically much
better at optimizing code than humans. In fact, hand-optimized code can confuse the compiler and result in
slower code. Instead you should focus on using correct algorithms and data structures.
Q. Is the loop for (int i = N-1; i >= 0; i--) more efficient than for (int i = 0; i < N; i++)?
A. Some programmers think so (because it simplifies the loop continuation expression), but in many cases it
is actually less efficient. Don't do it unless you have a good reason for doing so.
Q. Are there any automated tools for profiling a program?
A. If you execute with the -Xprof option, you will obtain all kinds of information.
% java -Xprof Quadratic 5000 < input1000000.txt
Flat profile of 3.18 secs (163 total ticks): main

Interpreted + native   Method
 0.6%    0 + 1   sun.misc.URLClassPath$JarLoader.getJarFile
 0.6%    0 + 1   sun.nio.cs.StreamEncoder$CharsetSE.writeBytes
 0.6%    0 + 1   sun.misc.Resource.getBytes
 0.6%    0 + 1   java.util.jar.JarFile.initializeVerifier
 0.6%    0 + 1   sun.nio.cs.UTF_8.newDecoder
 0.6%    1 + 0   java.lang.String.toLowerCase
 3.7%    1 + 5   Total interpreted

Compiled + native   Method
88.3%  144 + 0   Quadratic.main
 1.2%    2 + 0   StdIn.readString
 0.6%    1 + 0   java.lang.String.charAt
 0.6%    1 + 0   java.io.BufferedInputStream.read
 0.6%    1 + 0   java.lang.StringBuffer.length
 0.6%    1 + 0   java.lang.Integer.parseInt
92.0%  150 + 0   Total compiled
For our purposes, the most important piece of information is the number of seconds listed in the "flat profile." In this case, the profiler says our program took 3.18 seconds. Running it a second time may yield an answer of 3.28 or 3.16, since the measurement is not perfectly accurate. We repeat this experiment for different inputs of size 10,000, and also for inputs of sizes 20,000, 40,000, and 80,000. The results are summarized in the table and plot below.
THIS ALGORITHM IS WRONG!!
If n=0, we access A[0] and B[0], which do not exist. The original version returns zero as the inner product of empty arrays, which is arguably correct. The best fix is perhaps to change "Non-negative" to "Positive" in the Input specification. Let's call this fixed algorithm innerProductBetterFixed.
What about if statements?
Algorithm countPositives
Input: Non-negative integer n and an integer array A of size n.
Output: The number of positive elements in A
pos ← 0
for i ← 0 to n-1 do
if A[i] > 0 then
pos ← pos + 1
return pos
• Line 1 is one op.
• Loop initialization is one op.
• Loop termination test is n+1 ops.
• The if test is performed n times; each is 2 ops.
• Return is one op.
• The update of pos is 2 ops but is done ??? times.
What do we do? Let U be the number of updates done.
• The total number of steps is 1+1+(n+1)+2n+1+2U = 4+3n+2U.
• The best case (i.e., lowest complexity) occurs when U=0 (i.e., no numbers are positive) and gives a complexity of 4+3n.
• The worst case occurs when U=n (i.e., all numbers are positive) and gives a complexity of 4+5n.
• To determine the average case result is much harder, as it requires knowing the input distribution (i.e., are positive numbers likely?) and requires probability theory.
• We will primarily study worst case complexity.
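A Java version of countPositives with a counter for U (class and field names ours) makes the best and worst cases easy to observe:

```java
public class CountPositives {
    static int updates; // number of times pos is incremented (U in the text)

    static int countPositives(int[] A) {
        int pos = 0;
        for (int i = 0; i < A.length; i++) {
            if (A[i] > 0) {
                pos = pos + 1;
                updates++;   // count each update of pos
            }
        }
        return pos;
    }

    public static void main(String[] args) {
        updates = 0;
        countPositives(new int[]{-1, -2, -3}); // best case: U = 0
        System.out.println("U = " + updates);
        updates = 0;
        countPositives(new int[]{1, 2, 3});    // worst case: U = n = 3
        System.out.println("U = " + updates);
    }
}
```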
1.1.4 Analyzing Recursive Algorithms
Consider a recursive version of innerProduct. If the arrays are of size 1, the answer is clearly A[0]*B[0]. If n>1, we recursively get the inner product of the first n-1 terms and then add in the last term.
Algorithm innerProductRecursive
Input: Positive integer n and two integer arrays A and B of size n.
Output: The inner product of the two arrays
if n=1 then
    return A[0]*B[0]
return innerProductRecursive(n-1,A,B) + A[n-1]*B[n-1]
How many steps does the algorithm require? Let T(n) be the number of steps required.
• If n=1 we do a comparison, two (array) fetches, a product, and a return.
• So T(1)=5.
• If n>1, we do a comparison, a subtraction, a method call, the recursive computation, two fetches, a product, a sum, and a return.
• So T(n) = 1 + 1 + 1 + T(n-1) + 2 + 1 + 1 + 1 = T(n-1)+8.
• This is called a recurrence equation. In general these are quite difficult to solve in closed form, i.e. without T on the right hand side.
• For this simple recurrence, one can see that T(n)=8n-3 is the solution.
• We will learn more about recurrences later.
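The recursive pseudocode maps directly onto Java (class name ours; it assumes n >= 1, per the Input specification):

```java
public class InnerProductRecursive {
    // Direct transcription of the pseudocode; assumes n >= 1.
    static int innerProductRecursive(int n, int[] A, int[] B) {
        if (n == 1) return A[0] * B[0];              // base case: T(1) = 5 steps
        return innerProductRecursive(n - 1, A, B)    // T(n-1) steps
             + A[n - 1] * B[n - 1];                  // plus 8 more
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {4, 5, 6};
        System.out.println(innerProductRecursive(3, a, b)); // 32
    }
}
```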
Question
How many times will command(); execute in the following code?
(a)
for i from 1 to 10 do
    for j from 1 to 20 do
        command();
    end do;
end do;
(b)
for i from 1 to 10 do
    for j from 1 to i do
        command();
    end do;
end do;
(c)
i:=0;
j:=0;
while (i<11) do
    i:=i+1;
    while (j<21) do
        j:=j+1;
        command();
    end do;
end do;
(d)
for i from 1 to x do
    for j from 1 to y do
        command();
        for k from 1 to i do
            command();
        end do;
    end do;
    command();
end do;
COMPLEXITY
Since computers have different processing capabilities, it is more meaningful to represent the speed of an algorithm by the number of times a command is executed rather than the time it takes to complete the algorithm. This representation is called complexity. The complexity of an algorithm is a function that relates the number of executions in a procedure to the loops that govern these executions.
Consider the code:
procedure1:=proc(n)
local i;
for i from 1 to n do
    command();
end do;
end proc;
The number of times command is executed is directly related to the size of n. A function modeling this relation would be f(n) = n, where f(n) represents the number of times command is invoked. If a machine took two minutes to execute command, it would take (2 minutes)*f(n) to run the procedure.
In complexity we say that procedure1 is O(n) (big-oh of n), or that the running time is governed by a linear relation.
DETERMINING COMPLEXITY OF MORE COMPLICATED PROGRAMS
The following examples will further demonstrate an algorithm's complexity.
Example 1:
procedure2:=proc(n)
local i,j;
for i from 1 to n do
    for j from 1 to n do
        command();
    end do;
end do;
for i from 1 to 10000000 do
    command();
    command();
end do;
end proc;
The double loop contributes n^2 calls to command, and the second loop contributes a constant 2*10000000 calls, so f(n) = n^2 + 20000000, which corresponds to O(n^2).
Example 2:
procedure3:=proc(n)
local i,j;
for i from 1 to n do
    command();
    command();
    for j from 1 to n do
        command();
    end do;
end do;
procedure2(n);
end proc;
The i loop contributes 2n calls plus n^2 from the inner j loop, for 2n + n^2; the call to procedure2(n) adds another n^2 (plus its constant). So f(n) = 2n + 2n^2 plus a constant, which corresponds to O(n^2).
WORST CASE SCENARIO
Realistically we do not have command(); laid out in plain sight for us. Let us consider the long division algorithm from the section before: what is its complexity?
First let us fix y, the number that we are dividing into. What is the worst-case scenario, i.e., the scenario where we will have to do the most computation? The answer is when x is equal to one; in this case we will have to loop y times. From this we can conclude that at worst we have to carry out y computations, which corresponds to O(y).
When there are many possible scenarios to consider, we will always pick the worst case. This guarantees that the big-oh bound we choose will always be sufficient.
COMPLEXITY OF BUBBLE SORT
When the ith pass begins, the (i-1) largest elements are guaranteed to be in the correct positions. During this
pass, (n-i) comparisons are used. Consequently, the total number of comparisons used by the bubble sort to
order a list of n elements is:
(n − 1) + (n − 2) + ... + 2 + 1 = sum_{k=1}^{n-1} k = n(n − 1)/2
So we conclude that the complexity of bubble sort is O(n(n − 1)/2) = O(n^2).
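The comparison count can be confirmed by instrumenting a straightforward bubble sort (a sketch; names ours):

```java
public class BubbleCount {
    // Bubble-sorts a and returns the number of comparisons performed.
    static long bubbleSort(int[] a) {
        long comparisons = 0;
        int n = a.length;
        for (int i = 1; i < n; i++) {          // pass i
            for (int j = 0; j < n - i; j++) {  // n - i comparisons this pass
                comparisons++;
                if (a[j] > a[j + 1]) {         // swap out-of-order neighbors
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
            }
        }
        return comparisons;                    // (n-1) + (n-2) + ... + 1 = n(n-1)/2
    }

    public static void main(String[] args) {
        int n = 10;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i; // reverse-sorted input
        System.out.println(bubbleSort(a));        // 10*9/2 = 45
    }
}
```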
Recall the selection sort algorithm.
• Develop code for sorting an array a[0], a[1], ..., a[n-1]:
for (top = n-1; top > 0; top--)   /* Line 1 */
{
    largeLoc = 0;                 /* Line 2 */
    for (i = 1; i <= top; i++)    /* Line 3 */
        if (a[i] > a[largeLoc])   /* Line 4 */
            largeLoc = i;         /* Line 5 */
    temp = a[top];                /* Line 6 */
    a[top] = a[largeLoc];         /* Line 7 */
    a[largeLoc] = temp;           /* Line 8 */
}
To determine complexity, suppose the statement on line i requires ti time units.
Lines 1 and 2 are executed n − 1 times
  o time required: (n − 1)(t1 + t2)
Lines 3 and 4 are executed (n − 1) + (n − 2) + · · · + 1 = ½ n(n − 1) times
  o time required: ½ n(n − 1)(t3 + t4)
Line 5 is executed some fraction, p5, of the times for Line 4
  o time required: ½ n(n − 1) p5 t5
Lines 6, 7, 8 are executed n − 1 times
  o time required: (n − 1)(t6 + t7 + t8)
• Total time: (n − 1)(t1 + t2) + ½ n(n − 1)(t3 + t4) + ½ n(n − 1) p5 t5 + (n − 1)(t6 + t7 + t8)
• Rearranging terms, we get ½ n(n − 1)(t3 + t4 + p5 t5) + (n − 1)(t1 + t2 + t6 + t7 + t8), which is O(n^2)
• Note that the selection sort in the slides is not exactly the same as the one developed here; point out the differences
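Instrumenting the selection sort above confirms the ½ n(n − 1) count for the line-4 comparison (a Java translation; names ours):

```java
public class SelectionCount {
    // The selection sort developed above, with a counter on line 4's comparison.
    static long sort(int[] a) {
        long comparisons = 0;
        for (int top = a.length - 1; top > 0; top--) { /* Line 1 */
            int largeLoc = 0;                          /* Line 2 */
            for (int i = 1; i <= top; i++) {           /* Line 3 */
                comparisons++;
                if (a[i] > a[largeLoc])                /* Line 4 */
                    largeLoc = i;                      /* Line 5 */
            }
            int temp = a[top];                         /* Line 6 */
            a[top] = a[largeLoc];                      /* Line 7 */
            a[largeLoc] = temp;                        /* Line 8 */
        }
        return comparisons;                            // (n-1) + ... + 1 = n(n-1)/2
    }

    public static void main(String[] args) {
        System.out.println(sort(new int[]{3, 1, 4, 1, 5, 9, 2, 6})); // 8*7/2 = 28
    }
}
```

Unlike bubble sort, the comparison count here is the same for every input of a given size; only the number of line-5 assignments varies.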
PROBLEM SET 2
Question 1:
What are the complexities of the loops given in Question 1 from Problem Set 1?
Question 2:
Give the complexity of the algorithm outlined in Question 3 from Problem Set 1. As a point of interest, this algorithm is called "the Linear Search Algorithm"; why do you think this is?
PROBLEM SET 3 (STUDY)
The following is a pseudo-code description of the Binary Search Algorithm.
procedure binary search (x: integer; a[1], ..., a[n]: list of integers in increasing order)
i=1
j=n
while i<j
    m=(i+j)/2 rounded down to the nearest integer
    if x > a[m] then i=m+1
    else j=m
    end if
end while
if x=a[i] then location=i
else location = 0
end if
end procedure binary search
Question 1:
Implement this algorithm in Maple.
Question 2:
Determine how the algorithm works by printing out the list at various places in the procedure.
Question 3:
Determine this procedure's complexity.
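Question 1 asks for Maple; as a sketch, here is the same pseudocode in Java (0-based indexing, returning -1 instead of location = 0 when x is absent; names ours):

```java
public class BinarySearch {
    // Returns the index of x in the sorted array a, or -1 if absent.
    static int binarySearch(int[] a, int x) {
        int i = 0, j = a.length - 1;
        while (i < j) {
            int m = (i + j) / 2;      // rounded down, as in the pseudocode
            if (x > a[m]) i = m + 1;  // x must be in the upper half
            else          j = m;      // x, if present, is in the lower half
        }
        return (a.length > 0 && a[i] == x) ? i : -1;
    }

    public static void main(String[] args) {
        int[] a = {2, 3, 5, 7, 11, 13};
        System.out.println(binarySearch(a, 7)); // 3
        System.out.println(binarySearch(a, 4)); // -1
    }
}
```

Each pass through the loop halves the interval [i, j], which is the key observation for Question 3.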
MORE QUESTIONS
Exercises
1. Write a program Quartic.java that takes a command line parameter N, reads in N long integers from standard input, and finds the 4-tuple whose sum is closest to zero. Use a quadruple loop. What is the order of growth of your program? Estimate the largest N that your program can handle in 1 hour.
2. Write a program that takes two command line parameters N and x, reads in N long integers from standard input, and finds the 3-tuple whose sum is closest to the target value x.
3. Empirically estimate the running time of each of the following two code fragments as a function of N.

String s = "";
for (int i = 0; i < N; i++) {
    if (Math.random() < 0.5) s += '0';
    else                     s += '1';
}

StringBuffer sb = new StringBuffer();
for (int i = 0; i < N; i++) {
    if (Math.random() < 0.5) sb.append('0');
    else                     sb.append('1');
}
String s = sb.toString();

The first code fragment takes time proportional to N^2, whereas the second one takes time proportional to N.
4. Suppose the running time of an algorithm on inputs of size 1,000, 2,000, 3,000, and 4,000 is 5 seconds, 20 seconds, 45 seconds, and 80 seconds, respectively. Estimate how long it will take to solve a problem of size 5,000. Is the algorithm linear, linearithmic, quadratic, cubic, or exponential?
5. Empirically estimate the running time of the following code fragment as a function of N.
public static int f(int n) {
    if (n == 0) return 1;
    else return f(n-1) + f(n-1);
}
6. Each of the three Java functions below takes a nonnegative n as input, and returns a string of length N = 2^n with all a's. Determine the asymptotic complexity of each function. Recall that concatenating two strings in Java takes time proportional to the sum of their lengths.
public static String method1(int n) {
    String s = "a";
    for (int i = 0; i < n; i++)
        s = s + s;
    return s;
}

public static String method2(int n) {
    String s = "";
    int N = 1 << n; // 2^n
    for (int i = 0; i < N; i++)
        s = s + "a";
    return s;
}
public static String method3(int n) {
    if (n == 0) return "a";
    else return method3(n-1) + method3(n-1);
}
7. Each of the three Java functions from Repeat.java below takes a nonnegative N as input, and
returns a string of length N with all x's. Determine the asymptotic complexity of each function.
Recall that concatenating two strings in Java takes time proportional to the sum of their lengths.
public static String method1(int N) {
    if (N == 0) return "";
    String temp = method1(N / 2);
    if (N % 2 == 0) return temp + temp;
    else return temp + temp + "x";
}

public static String method2(int N) {
    String s = "";
    for (int i = 0; i < N; i++)
        s = s + "x";
    return s;
}

public static String method3(int N) {
    if (N == 0) return "";
    else if (N == 1) return "x";
    else return method3(N / 2) + method3(N - (N / 2));
}

public static String method4(int N) {
    char[] temp = new char[N];
    for (int i = 0; i < N; i++)
        temp[i] = 'x';
    return new String(temp);
}
8. Write a program Linear.java that takes a command line integer N, reads in N long integers from
standard input, and finds the value that is closest to 0. How many instructions are executed in the
data processing loop?
long best = Long.MAX_VALUE;
for (int i = 0; i < N; i++) {
    long sum = a[i];
    if (Math.abs(sum) < Math.abs(best))
        best = sum;
}
9. Give an order of growth analysis of the input loop of program Quadratic.java.
int N = Integer.parseInt(args[0]);
long[] a = new long[N];
for (int i = 0; i < N; i++)
    a[i] = StdIn.readLong();
Answer: linear time. The bottlenecks are the array initialization and the input loop.
10. Analyze the following code fragment mathematically and determine whether the running time is
linear, quadratic, or cubic.
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        if (i == j) c[i][j] = 1.0;
        else        c[i][j] = 0.0;
11. Analyze the following code fragment mathematically and determine whether the running time is
linear, quadratic, or cubic.
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            c[i][j] += a[i][k] * b[k][j];
12. The following code fragment (which appears in a Java programming book) creates a random permutation of the integers from 0 to N-1. Estimate how long it takes as a function of N.
int[] a = new int[N];
boolean[] taken = new boolean[N];
Random random = new Random();
int count = 0;
while (count < N) {
    int r = random.nextInt(N);
    if (!taken[r]) {
        a[r] = count;
        taken[r] = true;
        count++;
    }
}
13. Repeat the previous exercise using the shuffling method from program Shuffle.java described in
Section 2.5.
int[] a = new int[N];
Random random = new Random();
for (int i = 0; i < N; i++) {
    int r = random.nextInt(i+1);
    a[i] = a[r];
    a[r] = i;
}
14. What is the running time of the following function that reverses a string s of length N?
public static String reverse(String s) {
    int N = s.length();
    String reverse = "";
    for (int i = 0; i < N; i++)
        reverse = s.charAt(i) + reverse;
    return reverse;
}
15. What is the running time of the following function that reverses a string s of length N?
public static String reverse(String s) {
    int N = s.length();
    if (N <= 1) return s;
    String left = s.substring(0, N/2);
    String right = s.substring(N/2, N);
    return reverse(right) + reverse(left);
}
16. Give an O(N) algorithm for reversing a string. Hint: use an extra char array.
public static String reverse(String s) {
    int N = s.length();
    char[] a = new char[N];
    for (int i = 0; i < N; i++)
        a[i] = s.charAt(N-i-1);
    String reverse = new String(a);
    return reverse;
}
17. The following function returns a random string of length N. How long does it take?
public static String random(int N) {
    if (N == 0) return "";
    int r = (int) (26 * Math.random()); // between 0 and 25
    char c = (char) ('a' + r);          // between 'a' and 'z'
    return random(N/2) + c + random(N - N/2 - 1);
}
18. What is the value of x after running the following code fragment?
int x = 0;
for (int i = 0; i < N; i++)
    for (int j = i + 1; j < N; j++)
        for (int k = j + 1; k < N; k++)
            x++;
Answer: N choose 3 = N(N-1)(N-2)/3!.